Despite their extensive use across many programming languages, LLM-based code completion models have received little attention for functional languages such as Haskell. This paper explores improving model performance on completing Haskell functions by training on a Haskell dataset and evaluating with a new Haskell version of the HumanEval benchmark.
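For context, HumanEval-style tasks pair a function signature and a natural-language description with a reference solution, and the model is asked to complete the function body. A hypothetical Haskell rendering of such a task might look like the sketch below; the function name and problem are illustrative and are not drawn from the paper's actual dataset.

```haskell
-- Illustrative HumanEval-style task (hypothetical, not from the paper's dataset).
-- The prompt supplies the signature and description; the body is what a
-- code completion model would be asked to generate.

-- Prompt: "Return True if any two distinct numbers in the list are closer
-- to each other than the given threshold."
hasCloseElements :: [Double] -> Double -> Bool
hasCloseElements xs threshold =
  or [ abs (a - b) < threshold
     | (i, a) <- zip [0 :: Int ..] xs
     , (j, b) <- zip [0 :: Int ..] xs
     , i /= j
     ]

main :: IO ()
main = do
  print (hasCloseElements [1.0, 2.0, 3.9, 4.0, 5.0, 2.2] 0.3)  -- True
  print (hasCloseElements [1.0, 2.0, 3.0] 0.5)                 -- False
```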
Developing LLMs that can understand and work with functional programming marks a significant step toward more inclusive and better-optimized AI-powered code assistance. The paper highlights the potential of Haskell-specific datasets to strengthen AI code completion capabilities, with benefits for both educational and professional settings.