NewsLab
Apr 29 01:43 UTC

Show HN: I'm 15 and built a cryptographic accountability layer for AI agents (github.com)

4 points | by arian_ | 2 comments
i'm 15 and a sophomore in high school in california.

for the past two weeks i've been building a protocol that lets you prove what an AI agent actually did. not just log it. prove it. signed receipts before and after each action, hash chained, verifiable by anyone.
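the idea in one paragraph: each action gets a receipt before and after it runs, every receipt commits to the hash of the previous one, and each is signed so a third party can check both integrity and origin. here's a minimal sketch of that shape - names, fields, and the HMAC stand-in are mine, not the actual protocol (a real version would use asymmetric signatures like Ed25519 so verifiers don't need the signing key):

```python
import hashlib
import hmac
import json

# Stand-in signing key. Assumption for this demo only: the real design
# would use an asymmetric scheme (e.g. Ed25519) so anyone can verify
# without holding the secret.
SECRET = b"demo-key"

def make_receipt(prev_hash: str, phase: str, action: str, payload: dict) -> dict:
    """Build a receipt linked to the previous receipt by its hash."""
    body = {
        "phase": phase,      # "before" or "after" the action
        "action": action,
        "payload": payload,
        "prev": prev_hash,   # hash chain: altering one receipt breaks all later links
    }
    canonical = json.dumps(body, sort_keys=True).encode()
    body["hash"] = hashlib.sha256(canonical).hexdigest()
    body["sig"] = hmac.new(SECRET, canonical, hashlib.sha256).hexdigest()
    return body

def verify_chain(receipts: list) -> bool:
    """Recompute every hash and signature, and check each prev-link."""
    prev = "genesis"
    for r in receipts:
        body = {k: v for k, v in r.items() if k not in ("hash", "sig")}
        canonical = json.dumps(body, sort_keys=True).encode()
        if r["prev"] != prev:
            return False
        if r["hash"] != hashlib.sha256(canonical).hexdigest():
            return False
        expected_sig = hmac.new(SECRET, canonical, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(r["sig"], expected_sig):
            return False
        prev = r["hash"]
    return True

# One action = a before-receipt and an after-receipt, chained together.
chain = []
prev = "genesis"
for phase in ("before", "after"):
    r = make_receipt(prev, phase, "send_email", {"to": "user@example.com"})
    chain.append(r)
    prev = r["hash"]

assert verify_chain(chain)                              # untampered chain verifies
chain[0]["payload"]["to"] = "attacker@example.com"      # tamper with one receipt...
assert not verify_chain(chain)                          # ...and verification fails
```

the point of the before/after pair is that a verifier can see not only that an action was claimed, but that the claim was committed to before execution and confirmed after.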

this week microsoft merged my code into their agent governance toolkit. twice.

happy to answer questions about how it works.

Comments (2)

  1. lschueller
    You've been reposting this for 52 days. That doesn't add up with the "for the past two weeks" bs.
  2. naishoya
    I don't have any questions about how it works; and seeing that you used Claude to code this, I expect that even if someone did have questions, you could only answer with the explanations Claude gave during the generation cycle.

    You are in school; use your access to Claude and other LLMs to learn how code, compilers, hardware abstraction, and microcircuits function at a fundamental level. These concepts will provide the foundational comprehension for larger, more complex thinking as your wet-ware matures. At this developmental stage of learning, it really is all about building a strong network of 'effective study effects' within your mind.

    I see the appeal of showing off a product; many young programmers go through some flavor of this, and unfortunately the current state of technology is pushing learners toward value = product that solves a big problem = marketability.

    Almost the worst outcome for becoming a durable, self-empowered, and creative problem solver in the long run would be 'spectacular success' right now, at this specific moment.

    A slightly better, but more embarrassing, outcome is when the world points out the flaws of going live with a shallow git repo full of MIT-licensed AI slop (Claude's commit history: 2 commits, 3,791 additions, 25 deletions) and a website with 404 links from the git repo. It's a world-facing announcement that you vibe-coded some half-baked noise and haven't taken an extra minute on the fine details.

    The reason this is the better outcome is the opportunity to learn, to turn back to the tools and extract valuable comprehension of fundamentals that are available to you in a more thorough and more readily accessible manner than has existed at any prior moment in history.

    Many of the people who built these tools for you to use had to subscribe to a quarterly programming magazine and hand-type code examples into a text interface to see if they could get them to run - and mental exercises of that type brought about the abstractions and functional-programming principles that made progress toward modern computing possible.

    Don't rob your own mind of that weight-lifting type of brain strain; use these LLMs to get your hands on the most fundamental computing knowledge of yesteryear - the kind that is hard to grasp, simple to the point of seeming too basic, and yet somehow almost impossible to hold a clear picture of how it works.

    These will be the levers and fulcrums which enable your mind to find innovative solutions to the problems that no-one else will solve.

    Or ignore me, build trivialities that waste power and your time, reach maturity without the ability to solve problems that don't already have solutions buried in the training data, and live in anticipation of the next model update devaluing everything 'you' built.

    /<free-daily-lesson>

    p.s. Microsoft will steal anything MIT-licensed, and that is not any endorsement of capability or viability; it may actually be a negative signal for many in the world of security and reliability. Ask your favorite LLM about the history of Microsoft and system security - specifically the unstoppable AI surveillance features they are baking in at the OS level.