In a recent study, researchers evaluated the performance of language models, specifically CodeGPT and UniXcoder, in completing code for the functional programming language Haskell. The evaluation used Haskell functions drawn from a publicly accessible Haskell dataset on HuggingFace, alongside a novel HumanEval dataset for Haskell. The findings revealed that the pre-training knowledge of imperative languages in large language models (LLMs) does not transfer well to functional languages. Even so, code completion for such languages is feasible, underscoring the need for high-quality Haskell datasets for model training.
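The kind of task evaluated in such benchmarks can be illustrated with a small, hypothetical HumanEval-style example (not drawn from the paper's dataset): the model is given a type signature and a description, and must complete the function body.

```haskell
-- Illustrative completion task: given the signature and the comment,
-- a code-completion model would be asked to generate the body below.

-- | Return the running sums of a list, e.g. [1,2,3] -> [1,3,6].
runningSums :: [Int] -> [Int]
runningSums = scanl1 (+)   -- idiomatic functional solution

main :: IO ()
main = print (runningSums [1, 2, 3, 4])  -- [1,3,6,10]
```

Idiomatic solutions like `scanl1 (+)` rely on higher-order functions rather than loops and mutation, which is one reason completions learned from imperative corpora may transfer poorly to Haskell.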
This work emphasizes the importance of incorporating diverse programming languages into AI models to improve developer tools across coding paradigms. Such research can lead to more inclusive and effective coding assistance for functional programming languages, fostering a more versatile software development environment.