Finetuning GPT2 to Reconstruct Sentences

Two words are anagrams if one can be formed by permuting the letters of the other. Applying the same logic to sentences, we could say two sentences are "anagrams" (not a real term) if the words of one can be permuted to form the other. I thought it would be interesting to teach a language model to do this. You might be thinking that simply rearranging the words of a sentence doesn't require intelligence and can be done with very trivial algorithms, and you would be right, but I added an edge to this task: given a random sequence of words, the language model has to return a grammatically correct sequence using the same set of words....
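
A minimal sketch of how such training pairs could be built, assuming a simple shuffle-then-reconstruct format (the separator token and prompt layout are my assumptions, not necessarily the post's exact setup):

```python
import random

def make_training_pair(sentence: str, sep: str = " = ") -> str:
    """Shuffle a sentence's words to form the model input; the original
    sentence is the target GPT-2 must learn to reconstruct."""
    words = sentence.split()
    shuffled = words[:]
    random.shuffle(shuffled)
    return " ".join(shuffled) + sep + sentence

print(make_training_pair("the quick brown fox jumps over the lazy dog"))
```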

June 15, 2024 · 10 min · 2047 words · Damilola John

Training BERT to identify 15 programming languages.

This is a fun side project where I explored transformer-based text classification for the first time by training BERT to identify 15 of the most popular programming languages. I started with simple machine learning approaches and gradually worked my way up to more complex methods until I had a satisfactory solution. The Dataset: Our dataset is a CSV containing 45,000 samples, made up of two columns: the 'code' column contains the code snippets we want to classify, and the 'language' column, which is our label, contains the programming language each snippet belongs to....
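
A rough illustration of the final transformer step, using the Hugging Face Transformers library (the file name and BERT checkpoint are assumptions for the sketch; the post's actual pipeline may differ):

```python
import pandas as pd
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# 'code_snippets.csv' is a hypothetical file name standing in for the
# post's dataset, with 'code' and 'language' columns as described.
df = pd.read_csv("code_snippets.csv")
labels = sorted(df["language"].unique())
label2id = {lang: i for i, lang in enumerate(labels)}

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(labels),
    id2label={i: lang for lang, i in label2id.items()},
)

# Tokenize a small batch of snippets; fine-tuning would proceed from
# here with the usual Trainer or a plain PyTorch training loop.
batch = tokenizer(df["code"].tolist()[:8], truncation=True,
                  padding=True, return_tensors="pt")
logits = model(**batch).logits
```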

August 19, 2023 · 4 min · 841 words · Damilola John
Playlist Generator

An app that uses semantic search to generate afrobeat song playlists from input text. This is a side project where I built an end-to-end machine learning application that uses a sentence transformer model hosted as a model-as-a-service. It is a simple recommendation system that tries to find afrobeat songs matching a user's sentiment by comparing the user's text input to song lyrics and then returning the most similar songs....
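
A minimal sketch of the semantic-search core using the sentence-transformers library (the checkpoint and toy lyrics are assumptions; the deployed service will differ):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # checkpoint is an assumption

# Toy stand-ins for the app's afrobeat lyric corpus.
lyrics = {
    "Song A": "dancing all night under the city lights",
    "Song B": "tears falling, missing you since you left",
}
query = "I feel heartbroken and lonely tonight"

lyric_embs = model.encode(list(lyrics.values()), convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_emb, lyric_embs)[0]

# Rank songs by cosine similarity to the user's sentiment.
for title, score in sorted(zip(lyrics, scores.tolist()), key=lambda kv: -kv[1]):
    print(f"{title}: {score:.3f}")
```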

August 11, 2023 · 1 min · 138 words · Damilola John

Byte-Pair Encoding: The Tokenization Algorithm Powering Large Language Models.

Tokenization is an umbrella term for the methods used to turn text into chunks of words or sub-words. Tokenization has many applications in computer science, from compilers to Natural Language Processing. In this article, we will focus on tokenizers in language models, in particular a method of tokenization called Byte Pair Encoding. The last few years have witnessed a revolution in NLP, catalyzed mainly by the introduction of the transformer architecture in 2017 with the paper 'Attention Is All You Need' and epitomized by the release of ChatGPT in late 2022....
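
For a taste of the algorithm, here is the classic toy BPE merge loop in the spirit of Sennrich et al. (the corpus, merge count, and naive string replace are illustrative simplifications, not a production tokenizer):

```python
from collections import Counter

def pair_counts(vocab):
    """Count adjacent symbol pairs, weighted by word frequency."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(pair, vocab):
    """Fuse the chosen pair into one symbol everywhere it occurs."""
    old, new = " ".join(pair), "".join(pair)
    return {word.replace(old, new): freq for word, freq in vocab.items()}

# Words represented as space-separated symbols with corpus frequencies.
vocab = {"l o w": 5, "l o w e r": 2, "n e w e s t": 6, "w i d e s t": 3}
for _ in range(5):
    pairs = pair_counts(vocab)
    best = max(pairs, key=pairs.get)
    vocab = merge_pair(best, vocab)
    print("merged", best, "->", "".join(best))
```

Each iteration greedily fuses the most frequent adjacent pair, which is exactly how a BPE vocabulary of sub-word units is grown from characters.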

July 20, 2023 · 13 min · 2564 words · Damilola John