BERT builds on recent work in pre-training contextual representations, including Semi-supervised Sequence Learning, Generative Pre-Training, ELMo, and ULMFiT. Unlike these earlier models, however, BERT is a deeply bidirectional, unsupervised language representation, pre-trained using only a plain text corpus.
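To make "deeply bidirectional" concrete, here is a minimal sketch, assuming the Hugging Face `transformers` library (not part of the original BERT release), that asks a pre-trained BERT checkpoint to fill in a masked token. Because of its masked-language-model pre-training, the prediction is conditioned on words both to the left and to the right of the mask.

```python
# Minimal sketch (assumes the Hugging Face `transformers` package is
# installed) probing BERT's masked-language-model objective: the model
# predicts a masked token using context on BOTH sides of it.
from transformers import pipeline

# Load a pre-trained BERT checkpoint wrapped in a fill-mask pipeline.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

# Words before AND after [MASK] shape the prediction; this is the
# "deeply bidirectional" property described above.
for result in unmasker("The man went to the [MASK] to buy milk."):
    print(f"{result['token_str']:>10}  score={result['score']:.3f}")
```

A left-to-right model such as Generative Pre-Training could only use the words before the mask; BERT's top predictions here (e.g. "store", "market") also reflect the "to buy milk" continuation that follows it.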