Experiments show that AI could one day help audit smart contracts, but it isn't ready yet

How Can AI Help Secure Smart Contracts?

Artificial intelligence (AI) has already revolutionized a variety of industries, from healthcare and automotive to marketing and finance. Now, its potential is being tested in one of the blockchain industry’s most crucial areas — smart contract security. Experiments have demonstrated the potential of AI-based blockchain audits, but this nascent technology still lacks important qualities that human professionals bring: intuition, nuanced judgment, and subject-matter expertise.

Recently, OpenZeppelin conducted a series of experiments to evaluate AI’s potential for detecting vulnerabilities. Using OpenAI’s GPT-4 model, the team tested Solidity smart contracts from the Ethernaut web game for security issues. The results were encouraging: GPT-4 correctly identified vulnerabilities in 20 of the 28 challenges.

In some cases, the AI gave the correct answer to a simple question, such as spotting a naming issue in a constructor function. In others, the results were mixed or poor: GPT-4 sometimes failed to identify vulnerabilities even when they were clearly spelled out, and in one instance it invented a vulnerability that wasn’t actually present.
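The constructor naming issue is a classic Solidity pitfall: before version 0.4.22, a constructor was simply a function that shared the contract’s name, so a typo silently turned it into an ordinary public function. The contract and names below are an illustrative sketch of the pattern, not code from the Ethernaut challenge itself.

```solidity
pragma solidity ^0.4.24;

contract Vault {
    address public owner;

    // Intended as the constructor, but misspelled ("Vau1t" vs "Vault").
    // Because the name doesn't match the contract, this is just a
    // regular public function — anyone can call it and seize ownership.
    function Vau1t() public {
        owner = msg.sender;
    }
}
```

Modern Solidity removed this foot-gun by introducing the dedicated `constructor` keyword, which is exactly the kind of simple, well-documented pattern an LLM can reliably flag.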

Coinbase also conducted a similar experiment using ChatGPT to review token security. The AI was able to mirror manual reviews for a majority of smart contracts, but it had difficulty providing results for others. In some cases, it even labeled high-risk assets as low-risk ones.

It’s clear that AI-based vulnerability detection still has a long way to go. LLMs like GPT-4 and ChatGPT are not designed specifically for this purpose, and they lack the specialized training needed to recognize vulnerability patterns reliably. More dependable solutions will require machine learning models trained specifically on high-quality vulnerability datasets.

For example, the OpenZeppelin AI team recently built a custom machine learning model to detect reentrancy attacks. Early evaluation results show superior performance compared to industry-leading security tools, with a false positive rate below 1%.
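To make the target concrete, here is a minimal sketch of the reentrancy pattern such a model would be trained to flag. The contract below is illustrative only — it is not drawn from OpenZeppelin’s training data or tooling.

```solidity
pragma solidity ^0.8.19;

contract VulnerableBank {
    mapping(address => uint256) public balances;

    function deposit() external payable {
        balances[msg.sender] += msg.value;
    }

    // Vulnerable: the external call happens BEFORE the balance is
    // zeroed, so a malicious contract's fallback function can re-enter
    // withdraw() and drain funds before its balance is updated.
    function withdraw() external {
        uint256 amount = balances[msg.sender];
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok, "transfer failed");
        balances[msg.sender] = 0; // state update comes too late
    }
}
```

The standard fix is the checks-effects-interactions pattern: zero the balance before making the external call (or guard the function with a reentrancy lock such as OpenZeppelin’s `ReentrancyGuard`).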

Striking a balance between AI and human expertise

Experiments conducted so far demonstrate that while AI models can be a useful tool for identifying security vulnerabilities, they cannot replace the nuanced judgment and domain knowledge of human security professionals. GPT-4 was mainly trained on publicly available data up to 2021, so it cannot recognize complex or novel vulnerabilities outside the scope of its training data. Because blockchain technology evolves rapidly, developers must stay current with the latest advances and emerging vulnerabilities in the industry.

Looking ahead, the future of smart contract security will likely involve a combination of AI tools and human expertise. The most effective way to defend against AI-equipped cybercriminals is to use AI to catch the most common and well-known vulnerabilities, while human experts keep up with the latest developments and update AI solutions accordingly. Beyond cybersecurity, the combination of AI and blockchain will open up many more opportunities and innovative solutions.

AI alone won’t replace humans. However, human auditors who leverage AI tools will be much more effective than auditors who ignore this emerging technology.
