SaaS
AI
Conversational UI

AI Smart Contract Analyzer

Breaking down complex DeFi transactions with step-by-step narratives

Year

2024-2025

Role

Product Strategy, End-to-End UX, UI Craft

At a Glance

SCREEN’s AI Agent explains complex DeFi transactions by turning raw smart contract data into clear, step-by-step narratives. Investigators can see what happened, which functions were exploited, and how funds moved, cutting investigation time by roughly two-thirds.

As SCREEN’s founding designer, I led the design and integration of the AI Agent, owning the end-to-end UX: collaborating with engineers to embed prompts, working with investigators to collect real case questions, and iterating on outputs to balance clarity with technical accuracy.

Impact

65%

faster investigations

Problem

Smart contract exploits span dozens of contracts and internal calls. SCREEN visualized these interactions, but investigators still struggled to piece together the overall logic of an attack, often relying on in-house experts to interpret what the data really meant.

Solution

An Agentic AI experience that ingests transaction data, contract code, and documentation, then delivers plain-English explanations with step-by-step narratives, function call context, and token transfer evidence.

Context

Why Investigating Smart Contracts Is So Hard

Over the years, I kept seeing the same struggle when sitting with our clients. Even with SCREEN’s ability to analyze smart contracts and visualize internal transactions, investigators often needed one of our in-house DeFi experts to guide them through the transaction data. As attacks grew more complex, the process became overwhelming and slow, making it easy to miss how the hack actually happened.

“A single exploit could take me a couple of hours just to fully wrap my head around the transaction.”

– DeFi Investigator

Example of a complex transaction involved in a DeFi hack.

Challenge

🌀😵‍💫

Too Much Data at Once

A single exploit could involve 20+ contracts and 80+ internal calls. To understand every step of the attack, investigators had to dig through overwhelming data, often missing key evidence along the way.

📄 ➡️ 🤷🏼‍♂️

Language Barrier

Most investigators aren’t comfortable reading Solidity, Ethereum’s smart contract language. Without clear comments or prior knowledge, it’s hard to quickly understand what each function does.

Goal

Make complex transactions understandable

Break down multi-step DeFi exploits and internal flows into clear, digestible steps.

Summarize and highlight key exploits

Provide high-level narratives while pinpointing the specific functions and mechanisms attackers used.

Explain code in plain English

Translate smart contract source code and functions into clear language investigators can follow.

Design Approach

Speaking the Investigator’s Language

When we first tested SCREEN’s data with AI models, the answers were inconsistent. A vague question led to a vague answer, and vague questions were exactly what non-technical investigators asked naturally while piecing together complex cases. I didn’t want to force them to “prompt like an engineer.”

So I worked with clients to collect the kinds of questions they were really asking and then mapped those into the structure of our platform. That way, users could stay natural in how they asked, while on the backend we prepared the system with the right context and data. It wasn’t about training people to prompt better; it was about shaping the agent so the right evidence, code, and transaction details were always included before the AI answered.

Example of an ideal AI output format for transaction analysis.
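To make the context-routing idea above concrete, here is a minimal sketch of how investigator questions could be mapped to the right evidence before the model answers. The names (classify_question, CaseContext, build_prompt) and the keyword buckets are illustrative assumptions, not SCREEN’s actual implementation:

```python
from dataclasses import dataclass

# Hypothetical intent buckets, distilled from the kinds of questions
# investigators actually asked during case work.
INTENT_KEYWORDS = {
    "funds_flow": ["where did", "funds", "transfer", "moved"],
    "exploit_step": ["how did", "exploit", "attack", "drained"],
    "function_meaning": ["what does", "function", "do"],
}

@dataclass
class CaseContext:
    tx_hash: str
    internal_calls: list[str]    # decoded internal call trace
    token_transfers: list[str]   # token transfer evidence
    contract_source: str         # verified Solidity source, when available

def classify_question(question: str) -> str:
    """Map a natural-language question to an intent bucket."""
    q = question.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(k in q for k in keywords):
            return intent
    return "general"

def build_prompt(question: str, ctx: CaseContext) -> str:
    """Attach the right evidence before the model ever sees the question."""
    evidence_by_intent = {
        "funds_flow": ctx.token_transfers,
        "exploit_step": ctx.internal_calls,
        "function_meaning": [ctx.contract_source],
    }
    evidence = evidence_by_intent.get(classify_question(question), ctx.internal_calls)
    return (
        f"Transaction: {ctx.tx_hash}\n"
        "Evidence:\n" + "\n".join(evidence) + "\n\n"
        f"Explain step by step, in plain English: {question}"
    )
```

The point of the sketch is the design choice: the user’s question stays untouched, and all the “prompt engineering” happens in the routing layer.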

Iterations

Option A — Prompt Book (Build Your Own Mini-Agent)

This option built on our earlier reasoning-step design. Investigators could start with a pre-prompt we provided, add required inputs like a transaction hash or address, and save it as a reusable Prompt Book. Power users liked the flexibility: it felt like building their own mini-agent and reduced repetitive work across cases.

But this flexibility came with trade-offs. Investigators needed to learn how to tune prompts, and results varied widely depending on how each person set theirs up. Even with reasoning steps in place, some clients forgot inputs or reused prompts in ways that didn’t fit, which led to vague answers. Supporting dozens of user-authored prompt books also created a heavier maintenance burden for engineers and made it hard to guarantee consistent, compliant outputs.
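As an illustration of why forgotten inputs produced vague answers, here is a minimal sketch of what a Prompt Book’s contract might look like; PromptBook and its fields are hypothetical names, not the shipped schema:

```python
from dataclasses import dataclass, field

@dataclass
class PromptBook:
    """A reusable, user-authored prompt with declared required inputs."""
    name: str
    template: str                  # e.g. "Trace fund flows for {tx_hash}"
    required_inputs: list[str] = field(default_factory=list)

    def render(self, **inputs: str) -> str:
        missing = [k for k in self.required_inputs if k not in inputs]
        if missing:
            # Failing loudly here is what Option A lacked in practice:
            # a forgotten input otherwise slipped through and produced
            # a vague answer downstream.
            raise ValueError(f"missing required inputs: {missing}")
        return self.template.format(**inputs)

book = PromptBook(
    name="Fund flow trace",
    template="Trace every token transfer in transaction {tx_hash}, step by step.",
    required_inputs=["tx_hash"],
)
print(book.render(tx_hash="0xabc123..."))
```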

Option B — Multiple Agents (Pre-Built Modes)

Here, prompts and reasoning were embedded directly into the agent. Investigators simply picked Documentation, Contract, or Transaction analysis, provided the input, and got a structured answer every time. Clients testing this version said it felt more predictable and gave them confidence the agent “knew what it was doing.” For engineers, it meant more upfront work to build separate pipelines, but easier governance, auditability, and quality control in the long run.
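A sketch of the pre-built-mode idea, again with illustrative names only: because the prompts live in the product rather than with the user, every investigator gets the same pipeline for a given mode.

```python
from enum import Enum

class Mode(Enum):
    DOCUMENTATION = "documentation"
    CONTRACT = "contract"
    TRANSACTION = "transaction"

# Prompts and reasoning steps are embedded in the product, not authored
# by users, which is what made outputs predictable and auditable.
MODE_PROMPTS = {
    Mode.DOCUMENTATION: "Summarize what this protocol documentation says about: {input}",
    Mode.CONTRACT: "Explain, function by function, what this contract does: {input}",
    Mode.TRANSACTION: "Reconstruct this transaction as a step-by-step narrative: {input}",
}

def analyze(mode: Mode, user_input: str) -> str:
    """Every investigator gets the same pipeline for a given mode."""
    return MODE_PROMPTS[mode].format(input=user_input)

print(analyze(Mode.TRANSACTION, "0xdeadbeef..."))
```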

Decision

Option B — Multiple Agents (Pre-Built Modes)

In prototype testing, customers told us they felt more comfortable with Option B. With all the settings already embedded in the agent, they didn’t have to worry about tuning prompts. The outputs were consistent and faster to use, exactly what investigators needed in high-stakes investigations.

The Launch

First Launch, First Learnings

We launched our first beta AI Agent with select clients to gather real-world feedback. The agent was able to ingest SCREEN’s transaction data and explain what was happening, but in early sessions it often took several follow-up questions to arrive at a format that felt clear without losing critical steps. This beta phase gave us valuable signals about where to tighten the flow and how to make explanations more consistent.

Reflection

Looking Back

If I had more time, I would have tested the AI agent across a wider range of real DeFi cases to see how it handled different transaction patterns and data types. My goal would be to standardize the output format by identifying consistent patterns in how the agent explains similar cases, making the results clearer, more comparable, and easier for investigators to trust.

© 2025 Olivia Xu