Distributed Representations of Words and Phrases and their Compositionality
Mikolov, Tomas, et al. "Distributed representations of words and phrases and their compositionality." Advances in neural information processing systems 26 (2013).
Abstract (Eng.)
The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of “Canada” and “Air” cannot be easily combined to obtain “Air Canada”. Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.
Efficient Estimation of Word Representations in Vector Space
Mikolov, Tomas, et al. "Efficient estimation of word representations in vector space." arXiv preprint arXiv:1301.3781 (2013).
Abstract (Eng.)
We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day to learn high quality word vectors from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring syntactic and semantic word similarities.
Basic Structure of Word2Vec
"Efficient Estimation of Word Representations in Vector Space" → introduces CBOW and Skip-gram
"Distributed Representations of Words and Phrases and their Compositionality" → introduces Negative Sampling (plus subsampling of frequent words and phrase vectors)
CBOW → trained by predicting the target word from its surrounding context words (Context Word → Target Word)
Skip-gram → trained by predicting the surrounding context words from the target word (Target Word → Context Word)
It feels a bit like an encoding-decoding setup.
Up through NPLM, training was slow because each step pushed a large projection through a non-linear hidden layer with full matrix multiplications; CBOW and Skip-gram drop that hidden layer, so the projection reduces to embedding lookups and training becomes far more efficient.
The Window Concept
The window is the range that decides how many words before and after the target word are looked at, i.e., which words count as context words.
Moving through a sentence one word at a time, changing the target word at each step, is called a sliding window.
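A minimal sketch of how such a sliding window turns a tokenized sentence into (context words, target word) pairs; the helper name sliding_window_pairs and the window size of 2 are illustrative assumptions, not from the papers:

def sliding_window_pairs(tokens, window=2):
    # For each position, the target is the center word and the context is
    # every word within `window` positions to its left or right.
    pairs = []
    for i, target in enumerate(tokens):
        context = [tokens[j]
                   for j in range(max(0, i - window), min(len(tokens), i + window + 1))
                   if j != i]
        pairs.append((context, target))
    return pairs

print(sliding_window_pairs("I like natural language processing".split()))
# [(['like', 'natural'], 'I'), (['I', 'natural', 'language'], 'like'), ...]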
CBOW
Given the example sentence “I like natural language processing”,
→ if natural is the target word and the window size is 2,
→ then {I, like, language, processing} becomes the input data.
The randomly initialized hidden-layer parameters are trained so that the output becomes {natural}.
Input and Output
- All one-hot encoded vectors.
- Each is a vector of size 1 × |V| (|V| = vocabulary size).
W and W'
- The parameter matrices learned by the model.
- W is |V| × d and W' is d × |V| (d = embedding dimension).
h
- Multiplying a one-hot vector by W is identical to selecting one row of W.
- h is the average of the rows of W selected by the context word vectors.
- This is the hidden (projection) layer.
- Multiplying h by W' produces a score for every one of the |V| words,
- and finally softmax is applied to turn these prediction scores into probabilities.
- Each score is converted into a probability in proportion to its value.
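A minimal numeric sketch of this forward pass in PyTorch; the toy sizes |V| = 5 and d = 3 are assumptions for illustration only:

import torch
import torch.nn.functional as F

V, d = 5, 3                                # toy vocabulary size and embedding dimension
W = torch.randn(V, d)                      # input-side parameter matrix (|V| x d)
W_prime = torch.randn(d, V)                # output-side parameter matrix (d x |V|)

context_ids = torch.tensor([0, 1, 3, 4])   # one-hot x W is just a row lookup
h = W[context_ids].mean(dim=0)             # hidden layer: average of the selected rows
scores = h @ W_prime                       # one score per vocabulary word
probs = F.softmax(scores, dim=0)           # softmax turns scores into probabilities
print(probs)                               # training pushes the target word's probability up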
Skip-gram
Given the example sentence “I like natural language processing”,
→ if natural is the target word and the window size is 2,
→ then {natural | I}, {natural | like}, {natural | language}, {natural | processing} are the training pairs (input data).
The target word {natural} is trained 4 times in total (in CBOW it is trained only once).
For a corpus of the same size, Skip-gram gives each word more training updates and therefore better embedding quality, which is why Skip-gram is used more often than CBOW.
Input and Output
- All one-hot encoded vectors.
- Each is a vector of size 1 × |V| (|V| = vocabulary size).
W and W'
- The parameter matrices learned by the model.
- W is |V| × d and W' is d × |V| (d = embedding dimension).
u
a column vector of W' (the output-side vector of a context word)
v
a row vector of W (the input-side vector of the target word)
The left-hand side of the Skip-gram objective → a conditional probability: the probability that the context word o appears given the target word c.
To make the numerator large, the dot product between the target word's vector and the context word's vector must be made large.
The dot product grows with the cosine similarity of the two vectors → raising it raises the similarity between the two words.
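The softmax being referred to, as defined in the Skip-gram papers, with u and v as above:

P(o \mid c) = \frac{\exp(u_o^\top v_c)}{\sum_{w=1}^{|V|} \exp(u_w^\top v_c)}

where v_c is the row of W for the target word c and u_o is the column of W' for the context word o.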
The parameter matrix W is the collection of word embedding vectors, and it is the final product of Word2Vec.
Training a model that takes the target word and outputs its context words means raising the probability of the true context word while lowering the probabilities of every other word. Because the softmax denominator sums over the entire vocabulary, the amount of computation grows as the number of words grows. Negative sampling is the method introduced to solve this problem; a sketch appears below.
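A minimal sketch of the negative-sampling objective from the "Distributed Representations" paper, written as a standalone loss over one true (target, context) pair and k sampled negative words; the function name, k = 5, and the purely random negatives are simplifications (the paper draws negatives from the unigram distribution raised to the 3/4 power):

import torch
import torch.nn.functional as F

def negative_sampling_loss(v_c, u_o, u_negs):
    # v_c: (d,) vector of the target word, u_o: (d,) vector of the true context word,
    # u_negs: (k, d) vectors of k randomly sampled negative words.
    pos = F.logsigmoid(u_o @ v_c)               # pull the true (target, context) pair together
    neg = F.logsigmoid(-(u_negs @ v_c)).sum()   # push the sampled negatives away
    return -(pos + neg)                         # negative log-likelihood to minimize

d, k = 10, 5
print(negative_sampling_loss(torch.randn(d), torch.randn(d), torch.randn(k, d)))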
Code Implementation
""" CBOW """
## using pytorch
import torch
import torch.nn as nn
EMBEDDING_DIM = 128
EPOCHS = 100
example_sentence = """
Chang Choi is currently an Assistant Professor in the Department of Computer Engineering at Gachon University, Seongnam, Korea, Since 2020.
He received B.S., M.S. and Ph.D. degrees in Computer Engineering from Chosun University in 2005, 2007, and 2012, respectively.
he was a research professor at the same university.
He was awarded the academic awards from the graduate school of Chosun University in 2012.
""".split()
#(1) Split the input sentence into words and remove duplicates.
vocab = set(example_sentence)
vocab_size = len(vocab)
#(2) Build word : index and index : word dictionaries.
word_to_index = {word:index for index, word in enumerate(vocab)}
index_to_word = {index:word for index, word in enumerate(vocab)}
#(3) Build the training data.
# Convert a list of context words into a tensor of vocabulary indices.
def make_context_vector(context, word_to_ix):
    idxs = [word_to_ix[w] for w in context]  # e.g. ['Chang', 'Choi', 'is', 'currently'] -> [n, n, n, n], n = integer index
    return torch.tensor(idxs, dtype=torch.long)

# Build the (context, target) dataset with a window of 2 words on each side.
def make_data(sentence):
    data = []
    for i in range(2, len(sentence) - 2):
        context = [sentence[i - 2],
                   sentence[i - 1],
                   sentence[i + 1],
                   sentence[i + 2]]
        target = sentence[i]
        data.append((context, target))
    return data
data = make_data(example_sentence)
#(4) Define the CBOW model.
class CBOW(nn.Module):
    def __init__(self, vocab_size, embedding_dim):
        super(CBOW, self).__init__()
        self.embeddings = nn.Embedding(vocab_size, embedding_dim)
        self.layer1 = nn.Linear(embedding_dim, 64)
        self.activation1 = nn.ReLU()
        self.layer2 = nn.Linear(64, vocab_size)
        self.activation2 = nn.LogSoftmax(dim=-1)

    def forward(self, inputs):
        # Combine the context word embeddings into one (1, 128) vector
        # (summed here; the paper averages them, which differs only by a constant factor).
        embedded_vector = sum(self.embeddings(inputs)).view(1, -1)
        output = self.activation1(self.layer1(embedded_vector))
        output = self.activation2(self.layer2(output))
        return output
#(5) Instantiate the model, loss function, and optimizer.
model = CBOW(vocab_size, EMBEDDING_DIM)
loss_function = nn.NLLLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
#(6) Run the training loop.
for epoch in range(EPOCHS):
    total_loss = 0
    for context, target in data:
        context_vector = make_context_vector(context, word_to_index)
        log_probs = model(context_vector)
        total_loss += loss_function(log_probs, torch.tensor([word_to_index[target]]))
    print('epoch =', epoch, ', loss =', total_loss.item())
    optimizer.zero_grad()
    total_loss.backward()
    optimizer.step()
#(7) Pick a test context and run prediction.
test_data = ['Chang', 'Choi', 'currently', 'an']
test_vector = make_context_vector(test_data, word_to_index)
result = model(test_vector)
print(f"Input : {test_data}")
print('Prediction : ', index_to_word[torch.argmax(result[0]).item()])
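# As noted earlier, the learned embedding matrix (the W matrix from the explanation,
# stored here in the nn.Embedding layer) is the real product of training.
# A short illustrative follow-up, not part of the original code:
embedding_matrix = model.embeddings.weight.data           # shape (vocab_size, EMBEDDING_DIM)
print(embedding_matrix[word_to_index['Gachon']])          # the learned vector for 'Gachon'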
"""
Skip-gram
Skip-gram์ CBOW์
์ถ๋ ฅ์ ๋ฐ๋.
1๊ฐ๋ฅผ ์ฃผ์์ ๋ 4๊ฐ์ ์์ํ์ ์ฐพ์์ผํจ
"""
import torch
import numpy as np
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

torch.manual_seed(1)
embedding_dim = 10
raw_text = """
Chang Choi is currently an Assistant Professor in the Department of Computer Engineering at Gachon University, Seongnam, Korea, Since 2020.
He received B.S., M.S. and Ph.D. degrees in Computer Engineering from Chosun University in 2005, 2007, and 2012, respectively.
he was a research professor at the same university.
He was awarded the academic awards from the graduate school of Chosun University in 2012.
""".split()
# Convert a list of words into a tensor of vocabulary indices.
def make_context_vector(context, word_to_idx):
    idxs = [word_to_idx[w] for w in context]
    return torch.tensor(idxs, dtype=torch.long)
vocab = set(raw_text)     # deduplicate the token list into a set
vocab_size = len(vocab)   # 47 unique words in this example text

# Build word : index and index : word dictionaries.
word_to_idx = {word: i for i, word in enumerate(vocab)}
idx_to_word = {i: word for i, word in enumerate(vocab)}
data = []
for i in range(2, len(raw_text) - 2):
    target = raw_text[i]                              # the center (target) word
    context = [raw_text[i - 2], raw_text[i - 1],
               raw_text[i + 1], raw_text[i + 2]]      # its 4 surrounding context words
    data.append((target, context))
# e.g. data[0] == ('is', ['Chang', 'Choi', 'currently', 'an'])
class SkipGram(nn.Module):
    def __init__(self, vocab_size, embedding_dim):
        super(SkipGram, self).__init__()
        self.embeddings = nn.Embedding(vocab_size, embedding_dim)  # set the embedding dimension
        self.proj = nn.Linear(embedding_dim, 128)
        self.output = nn.Linear(128, vocab_size)

    def forward(self, inputs):
        # Look up the input word embedding(s) and flatten to shape (1, embedding_dim).
        out = sum(self.embeddings(inputs)).view(1, -1)
        out = F.relu(self.proj(out))
        out = self.output(out)
        out = F.log_softmax(out, dim=-1)
        return out
model = SkipGram(vocab_size, embedding_dim)
optimizer = optim.SGD(model.parameters(), lr=0.001)
losses = []
loss_function = nn.NLLLoss()
for epoch in range(100):
    total_loss = 0
    for target, context in data:
        model.zero_grad()
        input_tensor = make_context_vector([target], word_to_idx)  # the single target word as tensor([n])
        output = model(input_tensor)
        # Skip-gram objective: the target word should predict every one of its
        # context words, so sum one NLL term per surrounding word.
        loss = sum(loss_function(output, torch.tensor([word_to_idx[w]])) for w in context)
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    losses.append(total_loss)
print(losses)
print("*************************************************************************")
# Test: given a single target word, which word does the model rank highest as its context?
test_word = 'is'
test_vector = make_context_vector([test_word], word_to_idx)
log_probs = model(test_vector).data.numpy()
print('Raw text: {}\n'.format(' '.join(raw_text)))
print('Test target word: {}\n'.format(test_word))
max_idx = int(np.argmax(log_probs))
print('Predicted context word: {}'.format(idx_to_word[max_idx]))
References
https://comlini8-8.tistory.com/6
Mikolov, Tomas, et al. "Distributed representations of words and phrases and their compositionality." Advances in neural information processing systems 26 (2013).
Mikolov, Tomas, et al. "Efficient estimation of word representations in vector space." arXiv preprint arXiv:1301.3781 (2013).
https://everyday-log.tistory.com/entry/๋จ์ด-์๋ฒ ๋ฉ-2-Iteration-based-methods