
word2vec - issue #6
Does the vocab_size match the actual size of vocab in word2vec.c?
What steps will reproduce the problem?
1. Download the attached text_simple train file
2. Compile word2vec.c as: gcc word2vec.c -o word2vec -lm -pthread
3. Run: ./word2vec -train text_simple -save-vocab vocab.txt
What is the expected output? What do you see instead?
Expect in the saved vocab.txt file:
</s> 0
and 12
the 11
four 10
in 8
used 5
war 5
one 5
nine 9
What is really seen in the file:
</s> 0
and 12
the 11
four 10
in 8
used 5
war 5
one 5
The last element "nine 5" was missing.
What version of the product are you using? On what operating system?
MacOS, gcc version 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00)
Please provide any additional information below.
This is NOT really a bug report, because I am confused about the format of train_file and how the vocab is constructed from it.
Based on the source code of word2vec.c, when reading from train_file, it will:
- insert </s> as the first element of vocab
- scan each word (or </s> for each newline) in train_file, add it to vocab, and hash it in vocab_hash. So far vocab_size equals the number of words in vocab, INCLUDING the </s> at the head.
- sort the words in vocab by their counts, keeping </s> as the first element of vocab. Now vocab_size becomes the number of words in vocab, EXCLUDING the leading </s>. And if there is no newline character in train_file, </s> won't even be hashed in vocab_hash.
So there is an inconsistency between vocab_size and the actual size of vocab (which includes </s>). It could be a bug, because later the vocab is always iterated from index 0 to vocab_size-1, as in SaveVocab(). As a result, the leading </s> is saved but the last element of vocab is dropped. At least that's what it looks like with the simple train file "text_simple" attached here.
Attachment: text_simple (1020)
Comment #1
Posted on Aug 25, 2013 by Quick Giraffe
The same confusion arises in CreateBinaryTree() in word2vec.c, where the array representation of the tree uses vocab_size*2+1 elements, which I understand is essentially (len(vocab)-1)*2+1? That makes sense, as only n-1 internal nodes are needed for a full binary tree with n leaf nodes.
Many thanks if someone can clarify this a little bit.
Comment #2
Posted on Aug 26, 2013 by Quick Giraffe
Sorry, the expected output of the saved vocab.txt should be:
</s> 0
and 12
the 11
four 10
in 8
used 5
war 5
one 5
nine 5
It was a typo in the last line ("nine 9" should be "nine 5").
Comment #3
Posted on Jan 27, 2014 by Swift Panda
Also looking for the reason. There are words missing from the trained model which are expected to be in the vocabulary, since I set min_count = 1. (Working on CentOS.)
Status: New
Labels:
Type-Defect
Priority-Medium