Powered by blockchain technology, smart contracts have become the cornerstone of decentralized applications, showing great potential and value in fields such as finance and supply chain management. However, as the number of smart contracts surges, ensuring the security and reliability of their code becomes ever more important. Once a smart contract is deployed, its code cannot be changed, and any logical flaw may lead to significant financial losses. Developing an efficient and accurate smart contract auditing method is therefore crucial to protecting user assets and maintaining the health of the blockchain ecosystem. Although large language models (LLMs) have shown great promise in smart contract auditing, existing techniques still face many challenges. For example, even the state-of-the-art GPT-4 achieves only about 30% accuracy on smart contract auditing, even when combined with retrieval-augmented generation (RAG). This limitation stems largely from the fact that existing LLMs are pre-trained on general text/code corpora without fine-tuning for the specific domain of Solidity smart contract auditing. To address this problem, Aegis proposed the TrustLLM framework, which combines fine-tuning with LLM-based agents to provide a new, intuitive approach to smart contract auditing and to generate audit results with reasoned explanations. TrustLLM not only improves the accuracy of smart contract audits but also brings new momentum to the field of blockchain security.

The Importance and Challenges of Smart Contract Auditing

As a core component of blockchain technology, smart contracts are programs that automatically execute contract terms, ensuring the transparency and immutability of transactions without third-party intervention. In decentralized finance (DeFi), smart contracts play a particularly important role: they process and record large volumes of financial transactions and manage digital assets worth billions of dollars. Because smart contracts are difficult to change once deployed, any coding error or vulnerability can lead to loss of funds or other security incidents, making smart contract security an issue that cannot be ignored.

With the rapid development of DeFi, the number and complexity of smart contracts keep growing, and with them the risk of latent vulnerabilities. A vulnerable contract may be maliciously exploited, resulting in stolen funds, manipulated contract state, or other losses. Thorough and precise auditing is therefore essential to ensure that contracts remain stable and secure in the face of potential attacks. The purpose of smart contract auditing is to identify and fix potential security vulnerabilities before a contract is deployed and used. This not only protects the funds of investors and users, but also helps maintain the reputation of, and market trust in, DeFi platforms. As blockchain technology matures and its applications expand, smart contract auditing will only grow in importance as a key link in securing the healthy development of the entire DeFi ecosystem.

TrustLLM: An Innovative Solution for Smart Contract Auditing

TrustLLM represents a major innovation in smart contract auditing. By combining fine-tuning with agents based on large language models (LLMs), it gives auditors an intuitive and efficient auditing method. At the core of the framework is a unique two-stage fine-tuning approach, designed and optimized specifically for Solidity smart contract auditing. In the first stage, TrustLLM fine-tunes a detector model whose job is to determine whether a smart contract contains vulnerabilities. Trained on a large corpus, the detector learns to analyze code and classify it as safe or unsafe. This stage is crucial because it lays the foundation for the entire audit process, giving the model an accurate sense of potential security risks. The second stage fine-tunes a reasoner model, whose task is to generate the reasons behind a vulnerability. Once the detector flags a potential vulnerability, the reasoner analyzes the code further and explains in detail why the vulnerability exists and what type it is. This in-depth analysis not only helps auditors understand the nature of the problem, but also provides clues for fixing it.
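The detector-then-reasoner division of labor can be sketched as follows. This is a minimal illustration only: in TrustLLM both stages are fine-tuned LLMs, whereas the `detect` and `explain` functions below are hypothetical keyword heuristics standing in for those models.

```python
# Minimal sketch of the detector -> reasoner pipeline. The heuristics
# below are toy stand-ins: in TrustLLM both stages are fine-tuned LLMs.

def detect(code: str) -> bool:
    """Stage 1 (detector): classify the contract as vulnerable or safe."""
    risky_patterns = ["call.value", "delegatecall", "tx.origin"]
    return any(p in code for p in risky_patterns)

def explain(code: str) -> list:
    """Stage 2 (reasoner): propose candidate root causes for flagged code."""
    causes = []
    if "call.value" in code:
        causes.append("external call before state update may enable reentrancy")
    if "tx.origin" in code:
        causes.append("authorization via tx.origin is vulnerable to phishing")
    return causes

def audit(code: str) -> dict:
    """Run the detector first; invoke the reasoner only on positives."""
    if not detect(code):
        return {"vulnerable": False, "causes": []}
    return {"vulnerable": True, "causes": explain(code)}

snippet = "function withdraw() { msg.sender.call.value(bal)(); bal = 0; }"
print(audit(snippet))
# -> {'vulnerable': True, 'causes': ['external call before state update may enable reentrancy']}
```

The key design point mirrored here is that the expensive reasoning stage runs only on contracts the detector has already flagged, just as a human expert investigates causes only after intuition signals a problem.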

TrustLLM's two-stage fine-tuning approach mimics the intuition and analysis of human experts during an audit. First, the detector model performs a preliminary risk assessment, similar to a human auditor's intuitive judgment of the code. Then the reasoner model conducts an in-depth cause analysis, much as an expert performs a detailed review after spotting a problem. In addition, TrustLLM introduces two LLM-based agents, a Ranker and a Critic. These agents iteratively evaluate and debate the multiple candidate vulnerability causes generated by the reasoner model, and finally select the most appropriate explanation. This collaborative mechanism not only improves the accuracy of audit results, but also strengthens the model's ability to handle complex vulnerability scenarios.
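The Ranker/Critic interaction can be sketched as a simple loop: the Ranker orders candidate explanations, the Critic accepts or rejects the top one, and the debate continues until a candidate survives. Both agents below are toy stand-ins (a length heuristic and a keyword check); in TrustLLM they are LLM-based agents.

```python
# Sketch of the Ranker/Critic debate loop over candidate explanations.
# Both agents here are toy stand-ins for TrustLLM's LLM-based agents.

def ranker(candidates):
    """Order candidates, most detailed first (toy proxy: length)."""
    return sorted(candidates, key=len, reverse=True)

def critic(explanation):
    """Accept only explanations naming a concrete vulnerability class."""
    known = ("reentrancy", "overflow", "access control", "front-running")
    return any(k in explanation for k in known)

def select_explanation(candidates, max_rounds=5):
    """Iteratively take the top-ranked candidate until the critic accepts."""
    pool = list(candidates)
    for _ in range(max_rounds):
        if not pool:
            return None
        top = ranker(pool)[0]
        if critic(top):
            return top
        pool.remove(top)  # rejected; debate continues with the rest
    return None

candidates = [
    "the withdraw logic looks generally unsafe to call repeatedly somehow",
    "state is updated after the external call, enabling reentrancy",
]
print(select_explanation(candidates))
# -> state is updated after the external call, enabling reentrancy
```

Note how the vague first candidate is ranked highest but vetoed by the critic, so the loop falls through to the explanation that names a concrete cause, which is the behavior the iterative debate is meant to produce.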

TrustLLM's Practical Application Effects and Competitive Advantages

TrustLLM's innovative framework not only improves the efficiency and accuracy of smart contract audits, but also gives auditors deeper insight. In this way, TrustLLM helps audit teams identify and fix potential security vulnerabilities more effectively, protecting blockchain applications from attackers. As Web3 and blockchain technology continue to advance, TrustLLM and the technology behind it will become key tools for securing decentralized applications. TrustLLM's performance has been compared with several existing smart contract auditing techniques, including prompt-based LLMs (such as GPT-4 and GPT-3.5) and other fine-tuned models (such as CodeBERT, GraphCodeBERT, CodeT5, and UnixCoder). These comparisons are intended to demonstrate TrustLLM's advances and effectiveness in smart contract auditing.

First, TrustLLM shows a significant detection advantage over prompt-based LLMs. Although GPT-4 and GPT-3.5 are among the most advanced language models available, they do not perform as well as TrustLLM on smart contract auditing tasks, mainly because TrustLLM is fine-tuned specifically for the Solidity auditing domain while general LLMs are pre-trained on generic text/code corpora. TrustLLM's two-stage fine-tuning enables it to identify and explain vulnerabilities more accurately, whereas prompt-based LLMs can be limited on such domain-specific tasks.

Second, TrustLLM also compares favorably with traditionally fine-tuned models. CodeBERT, GraphCodeBERT, CodeT5, and UnixCoder all use full-model fine-tuning on specific tasks, yet TrustLLM surpasses them on multiple metrics: it achieves higher F1 score, accuracy, and precision, indicating that it is more effective at smart contract vulnerability detection. This advantage can be attributed to TrustLLM's unique architecture, which combines detector and reasoner models and iteratively refines their outputs through LLM agents, improving audit accuracy and reliability.

In addition, TrustLLM is designed with parameter efficiency and computational cost in mind. By using lightweight fine-tuning methods such as LoRA (Low-Rank Adaptation), TrustLLM reduces resource consumption while retaining the strengths of large models, making it not only superior in performance but also more feasible and scalable in practice. Finally, evaluation results show TrustLLM's superiority in aligning with real root causes: compared with GPT-4, the vulnerability explanations TrustLLM generates are more consistent with the actual causes, further demonstrating its practicality and accuracy in smart contract auditing.
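The parameter savings behind LoRA can be illustrated with a small numpy sketch. This is not TrustLLM's actual training code, and the dimensions are illustrative; it only shows the core idea: freeze the pretrained weight W and train two low-rank factors B and A, with the adapted layer computing W + (alpha/r)·BA.

```python
import numpy as np

# Core idea of LoRA (Low-Rank Adaptation): keep the pretrained weight
# W frozen and train only two small factors A (r x d) and B (d x r)
# with rank r << d. The adapted layer computes W + (alpha / r) * B @ A.
# Dimensions here are illustrative, not TrustLLM's actual sizes.

rng = np.random.default_rng(0)
d, r, alpha = 512, 8, 16

W = rng.normal(size=(d, d))            # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01     # trainable down-projection
B = np.zeros((d, r))                   # trainable up-projection, zero-init

def adapted_forward(x):
    """Forward pass: frozen path plus scaled low-rank update."""
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

# Because B starts at zero, the adapted layer initially matches the
# frozen layer exactly; training then updates only A and B.
x = rng.normal(size=(1, d))
assert np.allclose(adapted_forward(x), x @ W.T)

full_params = d * d          # parameters updated by full fine-tuning
lora_params = 2 * d * r      # parameters updated by LoRA
print(full_params, lora_params)  # -> 262144 8192
```

At these illustrative sizes the trainable parameter count drops from 262,144 to 8,192, which is the kind of saving that makes fine-tuning a large model affordable.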
In summary, TrustLLM shows clear advantages over existing techniques, whether in detection performance, parameter efficiency, or practical value. These results highlight TrustLLM's potential in smart contract auditing and point to new directions for future Web3 security research and applications. As blockchain technology continues to develop, TrustLLM and similar technologies will play an increasingly important role in securing smart contracts and advancing decentralized applications.

TrustLLM Application Cases

TrustLLM's application cases center on its audits of two undisclosed bounty projects on the Code4rena platform. Code4rena is a well-known bounty platform that encourages security researchers to discover and report security vulnerabilities in blockchain projects. Working with the platform, the researchers applied TrustLLM to real smart contract auditing tasks to verify its effectiveness and practicality in the real world. During the audits, TrustLLM demonstrated strong vulnerability detection capabilities: it identified known vulnerability types, analyzed latent security risks in depth, and provided detailed explanations of root causes. A comprehensive review of the two projects' smart contracts with TrustLLM uncovered six critical vulnerabilities. These findings are highly valuable to the project teams, because such vulnerabilities could be exploited by attackers and lead to asset losses or other security incidents. Notably, the findings were confirmed by the project teams or audit experts, which means TrustLLM has succeeded not only technically but also in gaining recognition from industry experts in practice, further proving its practicality and reliability in smart contract auditing. The paper also mentions a special case in which a vulnerability missed by all existing tools was successfully identified by TrustLLM; the project team and audit experts considered this an important security contribution, underscoring TrustLLM's innovation and foresight in smart contract security auditing. Through these real-world cases, TrustLLM demonstrated its potential in Web3 security, especially in smart contract auditing.
Its successful application not only raises the security bar for blockchain projects, but also points the way for future smart contract auditing tools and methods. As the Web3 ecosystem continues to develop and mature, TrustLLM and similar technologies will become increasingly important, providing a solid foundation for the security and stability of decentralized applications.

Aegis: The World's First Independently Profitable AI Auditor

In today's rapidly developing Web3 ecosystem, security auditing of smart contracts has become a crucial link. In a highly anticipated smart contract audit challenge, Aegis won a prize of 23,016 U with its outstanding auditing technology, an achievement that consolidates its R&D team's leading position in smart contract security research. Aegis's success rests on its underlying technical architecture, TrustLLM, the first large model built specifically for Web3 security. TrustLLM combines fine-tuning with agents based on large language models (LLMs) to provide an intuitive, insightful approach to smart contract auditing. It mimics the working methods of expert human auditors, a process that not only improves audit accuracy but also makes audit results interpretable.

Aegis's technical innovation is not limited to the TrustLLM framework. It also applies retrieval-augmented generation (RAG) together with the knowledge-matching and scenario-recognition capabilities of large models: trained on a structured vulnerability knowledge base and code data, it simulates the reasoning of human audit experts to perform intelligent audits. This lets Aegis efficiently and accurately detect logical vulnerabilities in smart contracts and security risks related to economic models, giving developers valuable protection before contract deployment. Aegis serves a broad audience, including professional auditors as well as developers. It supports multiple blockchain programming languages, such as Go, Rust, Solidity, and Move, covering almost all mainstream blockchain development environments, and offers multi-level service plans, from free trials to professional editions, to meet the needs of different users with a flexible, convenient experience.
The addition of Aegis not only brings a powerful AI agent to the AgentLayer ecosystem, but also offers the Web3 development community a safe and efficient auditing solution. With continued iteration and the experience gained from real bounty challenges, Aegis is expected to lead blockchain security auditing into a new era of intelligence and provide a solid security foundation for the development of decentralized applications.
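The retrieval step of a RAG-style audit flow like the one described above can be sketched as follows. The knowledge-base entries and the word-overlap similarity here are illustrative stand-ins: a production system would use a curated vulnerability knowledge base, learned embeddings, and a vector store rather than Jaccard similarity over words.

```python
# Toy sketch of RAG retrieval for auditing: match the contract under
# audit against a vulnerability knowledge base and prepend the
# best-matching entry to the audit prompt. Entries and similarity
# metric are illustrative stand-ins, not Aegis's actual data or code.

KNOWLEDGE_BASE = {
    "reentrancy": "External calls before state updates let attackers re-enter.",
    "integer overflow": "Unchecked arithmetic can wrap around and corrupt balances.",
    "access control": "Missing ownership checks let anyone call admin functions.",
}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity over lowercase word sets (embedding stand-in)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def retrieve(query: str) -> str:
    """Return the topic whose knowledge-base entry best matches the query."""
    return max(KNOWLEDGE_BASE, key=lambda t: similarity(query, t + " " + KNOWLEDGE_BASE[t]))

def build_prompt(code: str) -> str:
    """Augment the audit prompt with the retrieved knowledge."""
    topic = retrieve(code)
    return f"Known issue ({topic}): {KNOWLEDGE_BASE[topic]}\nAudit this code:\n{code}"

query = "the withdraw function makes an external call before updating state"
print(retrieve(query))  # -> reentrancy
```

The point of the augmentation is that the model audits the code alongside the most relevant known vulnerability pattern, rather than from the raw code alone.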

About AgentLayer

As the first decentralized AI agent public chain, AgentLayer promotes the agent economy and AI asset transactions on an L2 blockchain through its token, $AGENT. Its AgentLink protocol supports information exchange and collaboration among multiple agents, enabling decentralized AI governance.
