275 points · xerzes · 23 hours ago
github.comxerzesOP
carl_dr
I learnt a ton in the progress. I highly recommend others do the same, it’s a really fun way of spending an evening.
I will certainly be giving this MCP server a go.
VortexLain
By the way, this app had embedded the key into the shader, and it was required to actually run this shader on android device to obtain the key.
joecarpenter
I'm working on a hobby project - reverse-engineering a 30 year old game. Passing a single function disassembly + Ghidra decompiler output + external symbol definitions RAG-style to an agent with a good system prompt does wonders even with inexpensive models such as Gemini 3 Flash.
Then chain decompilation agent outputs to a coding agent, and produced code can be semi-automatically integrated into the codebase. Rinse and repeat.
Decompiled code is wrong sometimes, but for cleaned up disassembly with external symbols annotated and correct function signatures - decompiled output looks more or less like it was written by a human and not mechanically decompiled.
summarity
For objective C heavy code, I also use Hopper Disassembler (which now has a built in MCP server).
Some related academic work (full recompilation with LLMs and Ghidra): https://dl.acm.org/doi/10.1145/3728958
stared
A friend from work just used it (with Claude) to hack River Ride game (https://quesma.com/blog/ghidra-mcp-unlimited-lives/).
Inspired by the, I have it a try as well. While I have no prior experience with reverse engineering, I ported an old game from PowerPC to Apple Silicon.
First, including a few MCPs with Claude Code (including LaurieWired/GhidraMCP you forked from, and https://github.com/jtang613/GhidrAssistMCP). Yet, the agent fabricated as lot of code, instead for translating it from source.
I ended up using headless mode directly in Cursor + GPT 5.2 Codex. The results were the best.
Once I get some time, will share a write-up.
JasonADrury
This also seems to just be vibecoded garbage.
grosswait
tarasyarema
jakozaur
Actually, AI has huge potential for superhuman capabilities in reverse engineering. This is an extremely tedious job with low productivity. Currently reserved, primarily when there is no other option (e.g., malware analysis). AI can make binary analysis go mainstream for proactive audits to secure against supply-chain attacks.
xnorswap
raphaelmolly8
Curious about the hash collision rate in practice. The README mentions 154K+ entries from Diablo II patches. With that sample size, have you encountered meaningful false positives where structurally similar but semantically different functions matched? The Version Tracker comparison in the comments is fair — seems like combining this hash approach with additional heuristics (xref patterns, call graph structure) could reduce both false positives and negatives.
The headless Docker mode is a nice touch for CI integration. Being able to batch-analyze binaries and auto-propagate annotations without spinning up a GUI opens up some interesting automated diffing workflows.
Triangle9349
First impressions of the fork: everything has deviated too much from the original. look a bit sloppy in places. Everything seems overly complicated in areas where it could have been simpler.
There is an error in the release: Ghidra → File → Configure → Miscellaneous → Enable GhidraMCP. Developer not Miscellaneous.
I can't test it in antigravity there tools limit per mcp: Error: adding this instance with 110 enabled tools would exceed max limit of 100.
abhisek
It’s just easier to write code and do something specific for a task than load so many tool metadata.
I did not go past IDA. But I remember idc and IDA python. I wonder if it’s a better approach to expose a single tool to execute scripts to query what the agent needs.
deevus
rcarmo
wombat23
Last week-end I was exploring the current possibilities of automated Ghidra analysis with Codex. My first attempt derailed quickly, but after giving it the pyghidra documentation, it reliably wrote Python scripts that would alter data types etc. exactly how I wanted, but based on fixed rules.
My next goal would be to incorporate LLM decisions into the process, e.g. let the LLM come up with a guess at a meaningful function name to make it easier to read, stuff like that. I made a skill for this functionality and let Codex plough through in agentic mode. I stopped it after a while as I was not sure what it was doing, and I didn't have more time to work on it since. I would need to do some sanity checks on the ones it has already renamed.
Would be curious what workflows others have already devised? Is MCP the way to go?
Is there a place where people discuss these things?
longtermop
When an AI agent interacts with binary analysis tools, there are two injection vectors worth considering:
1. *Tool output injection* — Malicious binaries could embed prompt injection in strings/comments that get passed back to the LLM via MCP responses
2. *Indirect prompt injection via analyzed code* — Attackers could craft binaries where the decompiled output contains payloads designed to manipulate the agent
For anyone building MCP servers that process untrusted content (like binaries, web pages, or user-generated data), filtering the tool output before it reaches the model is a real gap in most setups.
(Working on this problem at Aeris PromptShield — happy to share attack patterns we've seen if useful)
chfritz
butz
hkpatel3
aetherspawn
IDA work(ed) fine but I misplaced my license somewhere.
bbayles
mrlnstk
underlines
kfk
rustyhancock
poly2it
DonHopkins
What parts of Ghidra (like cross referencing, translating, interpreting text and code) can be "uplifted" and inlined into skills that run inside the LLM completion call on a large context window without doing token IO and glacially slow and frequently repeated remote procedure calls to external MCP servers?
https://news.ycombinator.com/item?id=46878126
>There's a fundamental architectural difference being missed here: MCP operates BETWEEN LLM complete calls, while skills operate DURING them. Every MCP tool call requires a full round-trip — generation stops, wait for external tool, start a new complete call with the result. N tool calls = N round-trips. Skills work differently. Once loaded into context, the LLM can iterate, recurse, compose, and run multiple agents all within a single generation. No stopping. No serialization.
>Skills can be MASSIVELY more efficient and powerful than MCP, if designed and used right. [...]
Leela MOOLLM Demo Transcript: https://github.com/SimHacker/moollm/blob/main/designs/LEELA-...
>I call this "speed of light" as opposed to "carrier pigeon". In my experiments I ran 33 game turns with 10 characters playing Fluxx — dialogue, game mechanics, emotional reactions — in a single context window and completion call. Try that with MCP and you're making hundreds of round-trips, each suffering from token quantization, noise, and cost. Skills can compose and iterate at the speed of light without any detokenization/tokenization cost and distortion, while MCP forces serialization and waiting for carrier pigeons.
speed-of-light skill: https://github.com/SimHacker/moollm/tree/main/skills/speed-o...
More: Speed of Light -vs- Carrier Pigeon (an allegory for Skills -vs- MCP):
https://github.com/SimHacker/moollm/blob/main/designs/SPEED-...
randomtoast
clint
Specifically the dynamic analysis skills could get a really big boost with this MCP server, I also wonder if this MCP server could be rephrased into a pure skill and not come with all the context baggage.
I built this because reverse engineering software across multiple versions is painful. You spend hours annotating functions in version 1.07, then version 1.08 drops and every address has shifted — all your work invisible.
The core idea is a normalized function hashing system. It hashes functions by their logical structure — mnemonics, operand categories, control flow — not raw bytes or absolute addresses. When a binary is recompiled or rebased, the same function produces the same hash. All your documentation (names, types, comments) transfers automatically.
Beyond that, it's a full MCP bridge with 110 tools for Ghidra: decompilation, disassembly, cross-referencing, annotation, batch analysis, and headless/Docker deployment. It integrates with Claude, Claude Code, or any MCP-compliant client.
For context, the most popular Ghidra MCP server (LaurieWired's, 7K+ stars) has about 15 tools. This started as a fork of that project but grew into 28,600 lines of substantially different code.
Architecture:
I validated the hashing against Diablo II — dozens of patch versions, each rebuilding DLLs at different base addresses. The hash registry holds 154K+ entries, and I can propagate 1,300+ function annotations from one version to the next automatically.The headless mode runs in Docker (docker compose up) for batch processing and CI integration — no GUI required.
v2.0.0 adds localhost-only binding (security), configurable timeouts, label deletion tools, and .env-based configuration.
Happy to discuss the hashing approach, MCP protocol design decisions, or how this fits into modern RE workflows.