BPEmb: Tokenization-free Pre-trained Subword Embeddings in 275 Languages
about
BPEmb: Tokenization-free Pre-trained Subword Embeddings in 275 Languages
description
scientific article
@nl
scientific article, published in May 2018
@uk
name
BPEmb: Tokenization-free Pre-trained Subword Embeddings in 275 Languages
@da
BPEmb: Tokenization-free Pre-trained Subword Embeddings in 275 Languages
@en
BPEmb: Tokenization-free Pre-trained Subword Embeddings in 275 Languages
@nl
type
label
BPEmb: Tokenization-free Pre-trained Subword Embeddings in 275 Languages
@da
BPEmb: Tokenization-free Pre-trained Subword Embeddings in 275 Languages
@en
BPEmb: Tokenization-free Pre-trained Subword Embeddings in 275 Languages
@nl
prefLabel
BPEmb: Tokenization-free Pre-trained Subword Embeddings in 275 Languages
@da
BPEmb: Tokenization-free Pre-trained Subword Embeddings in 275 Languages
@en
BPEmb: Tokenization-free Pre-trained Subword Embeddings in 275 Languages
@nl
P2860 (cites work)
P1476 (title)
BPEmb: Tokenization-free Pre-trained Subword Embeddings in 275 Languages
@en
P2860 (cites work)
P304 (page(s))
P407 (language of work or name)
P577 (publication date)
2018-05-01T00:00:00Z