
InfiGUIAgent: A Multimodal Generalist GUI Agent with Native Reasoning and Reflection

Tags: MLLM · GUI Agents · Reasoning · Automation · Machine Learning
arXiv:2501.04575 [arXiv PDF]
Abstract
Graphical User Interface (GUI) Agents, powered by multimodal large language models (MLLMs), have shown great potential for task automation on computing devices such as computers and mobile phones. However, existing agents struggle with multi-step reasoning and rely on textual annotations, which limits their effectiveness. We introduce InfiGUIAgent, an MLLM-based GUI Agent trained with a two-stage supervised fine-tuning pipeline. Stage 1 enhances fundamental skills such as GUI understanding and grounding, while Stage 2 integrates hierarchical reasoning and expectation-reflection reasoning skills using synthesized data, enabling native reasoning abilities in the agent. InfiGUIAgent achieves competitive performance on several GUI benchmarks, highlighting the impact of native reasoning skills in enhancing GUI interaction for automation tasks. Resources are available at https://github.com/Reallm-Labs/InfiGUIAgent.
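As a rough illustration only (not from the paper or its repository), the two-stage supervised fine-tuning pipeline described in the abstract could be sketched as follows. The `Agent` class, the `train_sft` helper, and the dataset names are hypothetical placeholders standing in for the authors' actual training code:

```python
# Hypothetical sketch of a two-stage SFT pipeline like the one the
# abstract describes. All names here are illustrative assumptions,
# not the InfiGUIAgent implementation.
from dataclasses import dataclass, field


@dataclass
class Agent:
    """Toy stand-in for an MLLM-based GUI agent."""
    skills: list = field(default_factory=list)


def train_sft(agent: Agent, dataset_name: str, skills: list) -> Agent:
    """Placeholder for one supervised fine-tuning stage.

    A real stage would run gradient updates on (screenshot, action)
    or reasoning-trace pairs; here we only record the skills the
    stage is meant to instill.
    """
    agent.skills.extend(skills)
    return agent


agent = Agent()

# Stage 1: fundamental skills -- GUI understanding and grounding.
agent = train_sft(agent, "gui_grounding_data",
                  ["gui_understanding", "grounding"])

# Stage 2: synthesized data to enable native reasoning abilities.
agent = train_sft(agent, "synthesized_reasoning_data",
                  ["hierarchical_reasoning", "expectation_reflection"])

print(agent.skills)
```

The point of the two-stage ordering, per the abstract, is that grounding skills are established first so that the reasoning-focused second stage can build on them.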