Evaluation method?
instrucTrans
Is there code available for evaluating the model's translation results? Is the evaluation based on the similarity between texts? For example, in the case below, is similarity measured between ko_ref and InstrucTrans?
"en_ref":"This controversy arose around a new advertisement for the latest iPad Pro that Apple released on YouTube on the 7th. The ad shows musical instruments, statues, cameras, and paints being crushed in a press, followed by the appearance of the iPad Pro in their place. It appears to emphasize the new iPad Pro's artificial intelligence features, advanced display, performance, and thickness. Apple mentioned that the newly unveiled iPad Pro is equipped with the latest 'M4' chip and is the thinnest device in Apple's history. The ad faced immediate backlash upon release, as it graphically depicts objects symbolizing creators being crushed. Critics argue that the imagery could be interpreted as technology trampling on human creators. Some have also voiced concerns that it evokes a situation where creators are losing ground due to AI."
"ko_ref":"(Korean reference translation of the passage above)"
"InstrucTrans":"(the model's Korean translation of the same passage)"
Hello, and sorry for the late reply. For evaluation, we used SacreBLEU between ko_ref and the model prediction.
I'm sharing the inference and evaluation code used in the experiments below. Thank you.
python inference_translation_eeve.py -g 3 -d "eval_dataset/flores.csv" -m "yanolja/EEVE-Korean-Instruct-10.8B-v1.0"
python inference_translation_seagull.py -g 3 -d "eval_dataset/flores.csv" -m "kuotient/Seagull-13b-translation"
python inference_translation_kullm.py -g 3 -d "eval_dataset/flores.csv" -m "nlpai-lab/KULLM3"
python inference_translation_synatra.py -g 3 -d "eval_dataset/flores.csv" -m "maywell/Synatra-7B-v0.3-Translation"
# python inference_translation_base.py
import os
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("-m", "--model_name", type=str, default="meta-llama/Meta-Llama-3-8B-Instruct")
parser.add_argument("-d", "--dataset_path", type=str, default="gemini/ko-eng-dataset.csv")
parser.add_argument("-g", "--gpu_id", type=int, default=0)
args = parser.parse_args()
print(args)
# set the visible GPU before importing torch so CUDA picks it up
os.environ["CUDA_VISIBLE_DEVICES"] = str(args.gpu_id)
import torch
import evaluate
import pandas as pd
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained(args.model_name)
# tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(
    args.model_name,
    # device_map="auto",
    torch_dtype=torch.bfloat16,
).to('cuda')
model.eval()
def apply_template(example):
    SYSTEM_PROMPT = "당신은 번역기 입니다. 영어를 한국어로 번역하세요."  # "You are a translator. Translate English into Korean." (ours)
    conversation = {"messages": [
        {'role': 'system', 'content': SYSTEM_PROMPT},
        {'role': 'user', 'content': example["en_ref"]}
    ]}
    return conversation
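For reference, the templating step turns each dataset row into a chat-format record before tokenization. A standalone sketch of that logic on a toy row (`build_chat_record` is a hypothetical stand-in name for the same function):

```python
# Standalone sketch of the templating step above, applied to a toy example row.
SYSTEM_PROMPT = "당신은 번역기 입니다. 영어를 한국어로 번역하세요."  # "You are a translator. Translate English into Korean."

def build_chat_record(example):
    # Wrap one English source sentence into the chat format
    # later consumed by tokenizer.apply_chat_template.
    return {"messages": [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": example["en_ref"]},
    ]}

row = {"en_ref": "Hello, world.", "ko_ref": "안녕하세요, 세계.", "source": "toy"}
record = build_chat_record(row)
print(record["messages"][1]["content"])  # Hello, world.
```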
# datasets
tc_dataset = load_dataset("csv", data_files=args.dataset_path, split="train")
dataset = tc_dataset.map(apply_template, remove_columns=tc_dataset.features, batched=False, num_proc=64)
print(dataset)
# inference
output_list = []
for idx, data in enumerate(dataset):
    inputs = tokenizer.apply_chat_template(data['messages'], tokenize=True, add_generation_prompt=True, return_tensors='pt').to("cuda")
    # print(tokenizer.batch_decode(inputs))
    outputs = model.generate(inputs,
                             pad_token_id=tokenizer.eos_token_id,
                             max_new_tokens=512)
    output_decode = tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)
    print(f'{idx}:', output_decode)
    output_list.append(output_decode)
df = pd.DataFrame(tc_dataset)
df['ko_pred']=output_list
df = df[['ko_pred', 'ko_ref', 'en_ref', 'source']]
model_name = args.model_name.split('/')[-1]
output_path = 'inference_' + args.dataset_path.split('.')[-2]
print(output_path)
os.makedirs(output_path, exist_ok=True)
df.to_json(f'{output_path}/{model_name}_eval_result.json', lines=True, orient='records', force_ascii=False)
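The result file location is derived from the CLI arguments, which is why the eval commands below point at `inference_eval_dataset/...` paths. A minimal sketch of that path logic, using the flores invocation above as the example values:

```python
# Sketch of how the inference script derives its output path from its arguments.
dataset_path = "eval_dataset/flores.csv"
model_name_arg = "yanolja/EEVE-Korean-Instruct-10.8B-v1.0"

model_name = model_name_arg.split('/')[-1]                # strip the hub namespace
output_path = 'inference_' + dataset_path.split('.')[-2]  # drop ".csv", prefix "inference_"
result_file = f'{output_path}/{model_name}_eval_result.json'
print(result_file)  # inference_eval_dataset/flores/EEVE-Korean-Instruct-10.8B-v1.0_eval_result.json
```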
python eval_translation.py -i inference_eval_dataset/ko_news_eval40/nllb-finetuned-en2ko_eval_result.json
python eval_translation.py -i inference_eval_dataset/ko_news_eval40/EEVE-Korean-Instruct-10.8B-v1.0_eval_result.json
python eval_translation.py -i inference_eval_dataset/ko_news_eval40/Synatra-7B-v0.3-Translation_eval_result.json
python eval_translation.py -i inference_eval_dataset/ko_news_eval40/KULLM3_eval_result.json
# python eval_translation.py
import os
import argparse
import evaluate
import pandas as pd
parser = argparse.ArgumentParser()
parser.add_argument("-i", "--inference_path", type=str, default="result/nayohanllama3-8b-it-translation-271k_eval_result.json")
args = parser.parse_args()
print(args)
# evaluate sacrebleu
metric = evaluate.load("sacrebleu")
def compute_metrics(eval_preds):
    decoded_preds, decoded_labels = eval_preds
    result = metric.compute(predictions=decoded_preds, references=decoded_labels)
    result = {"bleu": result["score"]}
    result = {k: round(v, 2) for k, v in result.items()}
    return result
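`compute_metrics` keeps only the corpus-level `score` from sacrebleu's output and rounds it to two decimals. A self-contained sketch of that contract, with a hypothetical `StubMetric` standing in for `evaluate.load("sacrebleu")` so it runs offline:

```python
# StubMetric is a stand-in for evaluate.load("sacrebleu"); its compute()
# mimics the shape of sacrebleu's result dict ("score" is the corpus BLEU).
class StubMetric:
    def compute(self, predictions, references):
        return {"score": 23.456789, "counts": [1, 1, 1, 1]}

metric = StubMetric()

def compute_metrics(eval_preds):
    decoded_preds, decoded_labels = eval_preds
    result = metric.compute(predictions=decoded_preds, references=decoded_labels)
    result = {"bleu": result["score"]}           # keep only the corpus score
    result = {k: round(v, 2) for k, v in result.items()}
    return result

out = compute_metrics((["예측 문장"], ["참조 문장"]))
print(out)  # {'bleu': 23.46}
```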
# eval result to json
df = pd.read_json(args.inference_path, lines=True, orient='records')
result = []
for source in df['source'].unique():
    df_source = df[df['source'] == source].reset_index(drop=True)
    eval_preds = [df_source['ko_pred'], df_source['ko_ref']]
    eval_result = compute_metrics(eval_preds)
    # print(eval_result)
    eval_result['source'] = source
    result.append(eval_result)
output_df = pd.DataFrame(result, columns=['source', 'bleu'])
output_df = output_df.sort_values(by=['source'])
print(output_df)
output_path = '/'.join(args.inference_path.split('/')[:-1]) + '/eval'
output_file = args.inference_path.split('/')[-1]
os.makedirs(output_path, exist_ok=True)
output_df.to_json(f'{output_path}/{output_file}', lines=True, orient='records', force_ascii=False)
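The per-source loop above collects predictions and references for each dataset before scoring. The same grouping in plain Python, with toy records mirroring the `*_eval_result.json` schema in place of the real file:

```python
from collections import defaultdict

# Toy records mirroring the *_eval_result.json schema written by the inference script.
records = [
    {"ko_pred": "p1", "ko_ref": "r1", "source": "flores"},
    {"ko_pred": "p2", "ko_ref": "r2", "source": "aihub"},
    {"ko_pred": "p3", "ko_ref": "r3", "source": "flores"},
]

# Group predictions/references by source, as the pandas loop above does,
# so each source gets its own corpus-level BLEU.
grouped = defaultdict(lambda: {"preds": [], "refs": []})
for rec in records:
    grouped[rec["source"]]["preds"].append(rec["ko_pred"])
    grouped[rec["source"]]["refs"].append(rec["ko_ref"])

print(sorted(grouped))  # ['aihub', 'flores']
```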
# make_eval_dataset.py
import pandas as pd
from datasets import load_dataset
# flores
eval_dataset = load_dataset('traintogpb/aihub-flores-koen-integrated-sparta-30k')
df = pd.DataFrame(eval_dataset['test'])
df = df.drop('ko_ref_xcomet', axis=1)
df.to_csv('eval_dataset/flores.csv', index=False)
# iwslt2023
iwlst_en_ko_ban = load_dataset('shreevigneshs/iwslt-2023-en-ko-train-val-split-0.1', split='f_test')
iwlst_en_ko_zon = load_dataset('shreevigneshs/iwslt-2023-en-ko-train-val-split-0.1', split='if_test')
df = iwlst_en_ko_ban.to_pandas()
df = df[["en", "ko"]]
df.columns=["en_ref", "ko_ref"]
df['source'] = 'iwlst_en_ko_ban'
df.to_csv('iwlst_en_ko_banmal.csv', index=False)#, encoding='utf-8-sig')
print(df)
df = iwlst_en_ko_zon.to_pandas()
df = df[["en", "ko"]]
df.columns=["en_ref", "ko_ref"]
df['source'] = 'iwlst_en_ko_zon'
df.to_csv('iwlst_en_ko_zondae.csv', index=False)#, encoding='utf-8-sig')
print(df)
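All of the preparation steps above emit the same three-column schema (`en_ref`, `ko_ref`, `source`) that the inference script reads back in. A minimal round-trip check of that schema using only the standard library:

```python
import csv
import io

# Write and re-read one row in the en_ref/ko_ref/source schema used above.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["en_ref", "ko_ref", "source"])
writer.writeheader()
writer.writerow({"en_ref": "Hello.", "ko_ref": "안녕하세요.", "source": "iwlst_en_ko_ban"})

buf.seek(0)
rows = list(csv.DictReader(buf))
print(rows[0]["source"])  # iwlst_en_ko_ban
```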