Dataset 2: Recommended Language Models

Files

Download 30 characters, 12-gram, tiny (4.2 MB)

Download 30 characters, 12-gram, small (39.2 MB)

Download 30 characters, 12-gram, large (398.9 MB)

Download 64k words, 3-gram, tiny (4.0 MB)

Download 64k words, 3-gram, small (39.9 MB)

Download 64k words, 3-gram, large (400.2 MB)

Description

The word models were trained with a sentence start word of <s>, a sentence end word of </s>, and an unknown word of <unk>. The word vocabulary is the most frequent 64K words in the forum dataset that were also in a list of 330K known English words. All words are lowercase. The character models are 12-gram models trained using interpolated Witten-Bell smoothing. The character model vocabulary consists of the lowercase letters a-z, the apostrophe, <sp> for a space, <s> for sentence start, and </s> for sentence end.
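
This page does not state the file format or a query toolkit; assuming the models are standard ARPA n-gram files, a minimal query sketch using the kenlm Python bindings (an external library, and an assumption here, as is the file name) could look like this:

    import kenlm  # e.g. pip install kenlm

    # Hypothetical file name; substitute whichever model file you downloaded.
    # Note: KenLM is compiled with a maximum n-gram order (commonly 6), so
    # loading the 12-gram character models needs a build with order >= 12.
    model = kenlm.Model("word-64k-3gram-tiny.arpa")

    # Match the training conventions: lowercase, space-separated words.
    # Words outside the 64K vocabulary are scored as <unk>.
    sentence = "this is a test"
    print(model.score(sentence, bos=True, eos=True))  # log10 probability with <s> and </s>
    print(model.perplexity(sentence))                 # per-word perplexity of this sentence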

The perplexities in the table above are per-word or per-letter perplexities averaged over four evaluation test sets.
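
For concreteness, per-word (or per-letter) perplexity is 10 raised to the negative mean log10 probability of the tokens. The following sketch uses made-up numbers, not actual dataset results, to show how perplexity is computed and then averaged over four test sets:

    def perplexity(log10_probs):
        # Per-token perplexity: 10 ** (negative mean log10 probability).
        return 10.0 ** (-sum(log10_probs) / len(log10_probs))

    # Illustrative numbers only: one list of per-word log10 probabilities
    # for each of the four evaluation test sets.
    test_sets = [
        [-2.1, -1.8, -2.5, -1.9],
        [-2.4, -2.0, -1.7],
        [-1.9, -2.2, -2.3, -2.0, -2.1],
        [-2.0, -2.6, -1.8],
    ]
    per_set = [perplexity(s) for s in test_sets]
    print(sum(per_set) / len(per_set))  # average perplexity across the four sets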

The above mixture models were trained on a total of 504M words of data: 126M words of forum data, 126M words of Twitter data collected from the Twitter streaming API between December 2010 and June 2012, 126M words of forum data from the ICWSM 2011 Spinn3r dataset, and 126M words of blog data from the ICWSM 2009 Spinn3r dataset.
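
The page does not say how the per-source components were combined; a common construction is linear interpolation of the component models, with mixture weights tuned on held-out data. The sketch below assumes that approach, with dummy component models and uniform weights purely for illustration:

    # Dummy stand-ins for n-gram models trained on each source; each returns
    # P(word | context). Real components would be the forum, Twitter, and
    # Spinn3r models described above.
    forum_lm   = lambda w, h: 0.012
    twitter_lm = lambda w, h: 0.020
    icwsm11_lm = lambda w, h: 0.015
    icwsm09_lm = lambda w, h: 0.010

    def mixture_prob(word, context, components, weights):
        # Linear interpolation: P_mix = sum_i weight_i * P_i(word | context).
        return sum(wt * lm(word, context) for lm, wt in zip(components, weights))

    lms = [forum_lm, twitter_lm, icwsm11_lm, icwsm09_lm]
    print(mixture_prob("test", ("a",), lms, [0.25] * 4))  # 0.01425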

Publication Date

2019

Keywords

language models, text mining, text analysis

Disciplines

Computer Sciences

Publisher's Statement

This dataset supports the paper "Mining, analyzing, and modeling text written on mobile devices," which can be accessed on Digital Commons @ Michigan Tech.

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
