ANNE: Distributed Data Network Explained
What is a distributed data network, and how does it differ from the centralized services we use every day? The following insights explore the architecture, economics, and philosophy behind networks where data lives on user-owned hardware rather than corporate servers.
From the distinction between decentralization and distribution to the specific protocols that enable peer-to-peer file transfer and semantic querying, these insights provide a comprehensive introduction to the distributed data network model and its real-world implementation in ANNE.

My data is mine, right?
You type a question into a search bar. You scroll through a social media feed. You ask an AI for help. Feels private, right? Just you and the screen?
It’s not. Every click, every like, every half-formed thought you type into a big tech platform is quietly copied, packed away, and sold to the highest bidder. Your personal data is the digital fingerprint of your life and has become the product. You are the product.
These platforms aren’t just hosting your conversations; they’re mining them. They build intricate profiles to predict your behavior, manipulate your emotions, and sell your attention to advertisers. In exchange for “free” service, you hand over the very thing that defines you. And the worst part? You have no control. Your account, your connections, your voice – they can vanish tomorrow if an algorithm decides you’ve stepped out of line.
That’s the deal we’ve all silently accepted. But what if you could opt out?
With ANNE, the rules change. Your data stays on your hardware. On your machine, under your control. You choose what to share, with whom, and for how long. No central servers, no data-mining middlemen, no faceless algorithm deciding your fate. Just direct, sovereign ownership of your digital life.
So, is your data really yours? On ANNE, the answer is finally yes – because a distributed data network has no central point of collection to mine or monetize it.

Decentralization: what it really means…
Decentralization is a spectrum: along most of it, a system may still contain multiple single points of failure. The highest degree of decentralization is reached when no single point of failure exists. This guarantees the continuity and unstoppable nature of the system.
A system of two units is decentralized when each unit can fulfill the same function in case the other fails. This is called redundancy of function. The more parts such a system has, the greater its degree of decentralization by function.
However, a decentralized system can be undermined by external factors. For example, if a single entity manages all units of a decentralized network, the entire system’s effectiveness relies on that operator’s ability to ensure continuous operation. Further, a system may depend on centralized components, such as the domain name system or third-party hosting services, which can pose risks to its overall functionality.
Such a system, while decentralized by function, is centralized in service. If access can be denied or severely affected by terminating the operator or several operators, or struck down at the level of the third-party intermediary, calling it decentralized is a misleading illusion at best.
The truth of the matter is that a system can only truly be decentralized when all of the following conditions are met:
- No single points of failure exist within the system’s architecture, ensuring that the failure or removal of any individual component does not compromise the overall functionality.
- Redundancy of function is implemented across multiple independent units, where each unit can autonomously perform the same critical tasks without reliance on others.
- Control and operation are distributed among diverse, independent operators or entities, preventing any single operator or coordinated group from being a vulnerability that could halt the system.
- The system avoids dependence on centralized external services or intermediaries (such as the domain name system, third-party hosting, or proprietary infrastructure) that could be targeted, censored, or disrupted.
- The system’s design ensures resilience against external interventions, including legal, technical, or coercive actions that might attempt to terminate access, operators, or components, thereby guaranteeing unstoppability and continuity under adversarial conditions.
Meeting these conditions requires more than just decentralization of control; it requires a fundamentally different architectural foundation: the distributed data network…

Distributed Data Network Explained: ANNE’s Architecture of Sovereignty…
A distributed data network is an infrastructure where data storage, processing, and retrieval are spread across independently operated nodes with no central coordination. Every node that joins the network contributes to the whole. The network does not rely on any single server, data center, or provider. It is a mesh of peers, each holding and serving data while participating in the collective operation of the system.
Decentralization vs. Distribution: A Critical Distinction
Decentralization and distribution are terms often used interchangeably, but they describe fundamentally different things. Understanding the distinction is key to understanding how modern peer-to-peer systems actually work.
Decentralization is about control. A decentralized system is one where no single entity has ultimate authority. Decisions are made collectively, power is diffused, and there is no central point of failure or censorship. Monero is decentralized because no government or corporation controls it. Your Monero wallet is decentralized because only you hold the keys. Decentralization answers the question: who is in charge?
Distribution is about structure. A distributed system is one where components located on networked computers communicate and coordinate their actions by passing messages. The workload and data are spread across multiple nodes that operate concurrently. The Monero network, for example, is both distributed and decentralized. In contrast, content delivery networks like Cloudflare are distributed – they serve the same content from many locations – but remain under centralized control. Distribution answers the question: how is the system built?
A system can be decentralized without being distributed. A small community running a shared server with collective decision-making is decentralized in governance but centralized in infrastructure. Conversely, a system can be distributed without being decentralized. Cloud services like Google Docs are highly distributed across data centers but remain under centralized corporate control.
In a true distributed data network, there is no master copy, no primary node, no hierarchy. Every participant runs the same software, follows the same protocols, and contributes equally to the network’s resilience and capacity. This flat architecture is what distinguishes distributed systems from merely decentralized ones.
Core Characteristics of a Distributed Data Network
1. Full Data Replication
Data is not stored in one place. It is replicated across all participating nodes. Every node holds a complete copy of the shared dataset. When you query your local node, you are querying the same complete dataset that exists everywhere else. This is distribution at the data layer: the complete state of the network is present on every participant’s hardware.
2. Distributed Consensus
The network does not rely on any central coordinator to determine what data is valid. Nodes independently validate incoming information against the same set of rules. They gossip among themselves, each building its own view of the shared state and converging on the same canonical history through protocol-defined mechanisms. Every decision about the state of the network is made collectively by all participating nodes.
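The convergence dynamic described above can be sketched in a few lines. This is a toy model, not ANNE’s actual gossip protocol: real nodes exchange views with randomly chosen peers over many rounds and validate every item against protocol rules before adopting it.

```python
# Toy gossip: each node holds a set of validated items (its "view").
# One deterministic round: every node merges the view of its ring
# neighbor. Real gossip picks peers at random, but the outcome is the
# same: without any coordinator, all nodes converge on one shared state.
def gossip_round(views):
    n = len(views)
    for i in range(n):
        views[i] |= views[(i + 1) % n]

views = [{"a"}, {"b"}, {"c"}, {"d"}]  # four nodes, each with unique data
for _ in range(len(views)):           # a few rounds suffice to converge
    gossip_round(views)

assert all(v == {"a", "b", "c", "d"} for v in views)
```

The point of the sketch is the absence of any master copy: each node ends up with the full state purely by merging what its peers hold.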
3. Local Query Processing
Nodes do not fetch data from remote servers. They build queryable structures locally from the replicated dataset. When you search or traverse relationships, you are operating on data stored in your node’s database or memory, not sending queries to a centralized database. Each node constructs and maintains its own identical copy for local access.
4. Peer-to-Peer Data Transfer
When files or large payloads need to move across the network, they do so directly between peers. There are no central trackers, no single sources, no bottlenecks. Data is split into chunks and retrieved from multiple peers in parallel. Each chunk is verified independently. If a peer goes offline, others fill the gap. The data exists across the network, distributed among nodes that have opted to store or cache it.
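The chunk-and-verify pattern can be sketched as follows. This is an illustrative model only: the chunk size, the use of SHA-256, and the manifest structure are assumptions for the sketch, not ANTOR’s actual wire format.

```python
import hashlib

CHUNK_SIZE = 4  # bytes, for illustration; real protocols use far larger chunks

def make_manifest(data: bytes) -> list:
    """Split data into chunks and record each chunk's hash so that
    every chunk can be verified independently of which peer sent it."""
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    return [hashlib.sha256(c).hexdigest() for c in chunks]

def verify_chunk(index: int, chunk: bytes, manifest: list) -> bool:
    """Accept a chunk only if its hash matches the manifest entry;
    the identity of the peer that supplied it is irrelevant."""
    return hashlib.sha256(chunk).hexdigest() == manifest[index]

payload = b"hello distributed world"
manifest = make_manifest(payload)

assert verify_chunk(0, b"hell", manifest)      # chunk from an honest peer
assert not verify_chunk(0, b"HELL", manifest)  # tampered chunk is rejected
```

Because each chunk verifies independently, chunks can be fetched from many peers in parallel, and a peer going offline just means re-requesting its chunks elsewhere.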
5. Distributed Application Layer
Applications do not communicate with central API servers. They send requests that propagate through the network to nodes that provide the relevant data. Responses travel back along the accumulated route. There is no load balancer, no application gateway, no single point of control. The application layer itself is distributed, with each provider node handling requests independently.
Why Distribution Matters
Resilience Through Redundancy. If any single node goes offline, the network continues unaffected. There are no servers to restart, no databases to restore. Thousands of other nodes hold the same data and can answer the same queries. The only way to stop the network is to shut down every node, a practical impossibility given global distribution.
Sovereignty Through Local Control. Your data lives on your hardware. When you query the network, you are querying your copy. When you serve data, you are serving from your disk. No remote service can revoke your access, throttle your queries, or disappear with your information. The distributed network is a commons you participate in, not a service you consume.
Scalability Through Participation. In a client-server model, scaling means provisioning more servers. In a distributed data network, scaling happens automatically as more nodes join. Each new participant adds storage capacity, query throughput, and distribution bandwidth to the collective. The network grows stronger with every node.
Neutrality Through Protocol. No single entity controls what data is stored or who can access it. The rules are encoded in the protocol, not enforced by a platform. If your data conforms to the network’s standards, it becomes part of the shared dataset. If you opt into a particular data type, you can request its payloads. There is no corporate policy to appeal, no terms of service to violate, no central authority to petition.
ANNE: A Real-World Distributed Data Network
ANNE is a complete example of a distributed data network built on these principles. Its architecture reflects each layer described throughout these Insights and in the ANNE Library documentation linked at the top of this page.
- The Proof-of-Space-Time datachain is fully replicated on every ANNE Node, providing the foundation for distributed consensus.
- The neuromorphic hypergraph is built locally from the datachain, enabling distributed query processing.
- ANTOR enables peer-to-peer file distribution across the network.
- The Alt Data Network provides distributed request-response for application payloads.
In ANNE, distribution is not a feature added on top. It is the foundational layer everything else builds upon. When you run an ANNE Node, you are not connecting to the network. You are becoming the network. Your machine joins thousands of others in forming a distributed data network where data lives everywhere and control over your data remains with you.

ANNE in a nutshell: What makes it different?
ANNE is a new layer-one distributed platform that serves as a Distributed Open-Source Intelligence (DOSINT) resource, allowing people to share and distribute information directly between each other, without relying on central servers. Think of it as a network where everyone can contribute to a collective pool of knowledge about various topics, such as people, events, places, and even documents or web content. The platform connects everyone through a semantic protocol: the 1Schema data protocol.
On-chain and off-chain documents or files can be shared between participating peers via the Antor protocol, akin to the BitTorrent protocol, enabling users to distribute encrypted files over the ANNE Network in a decentralized manner. This means that when you search for information or download files, you do so locally, through your own “annode”. This way, you won’t have to worry about privacy concerns or censorship, since the public data isn’t stored on third-party servers but replicated through the data network.
Every time someone shares new information through their annode, all other users in the network receive that information. This makes the entire data network grow smarter and more connected over time.
ANNE is designed to empower developers and users alike to collaborate on creating shared resources of public information. It’s immensely useful for researchers, journalists, and anyone seeking to uncover connections otherwise unseen.
ANNE combines a peer-to-peer cash system with a peer-to-peer data system, enabling direct queries through your local annode without relying on third-party data intermediaries.

Who is behind ANNE?
Mr Scatman is the inventor of ANNE, the ANNE network, 1schema protocol, ANTOR protocol and the alt-data-network.
Before ANNE was a native Layer 1, early iterations of ANNE were used in real-world projects. The initial spark of inspiration came from work at an AGI startup between 2006 and 2010. US patents were granted during that time, but they are no longer applicable. A novelty gematria application was built while lead dev Mr Scatman was under a non-compete with the AGI startup. Later, the neuromorphic hypergraph was applied in the healthcare data space for business intelligence support, and even in a novelty movie recommendation site.
It was from this use of ANNE + blockchain that it became clear P2P Cash chains were not the right fit to maximize the neuromorphic hypergraph. ANNE needed… ANNE. A P2P Semantic Data System.
Follow Mr Scatman on X at @MRScatman_dev and @ANNE_p2p

ANNE Media’s role in the ecosystem…
We are a bunch of self-licensed cypherpunks obsessed with the true progress of humanity through the decentralization of all things, as opposed to the present hierarchy running from the centre to the periphery. ANNE Media is loosely affiliated with ANNE, as we are committed to realizing its vision by delivering state-of-the-art, fully decentralized and distributed applications that integrate Monero and ANNE, which you can operate as a sovereign individual from home.
This ANNE Media website serves as a complementary info portal to the ANNE Network and a bridge between the centralized world and the emerging sovereign world of ANNE. Our efforts are aimed at lowering the barrier of entry into ANNE’s ecosystem by delivering a cross-platform installation suite – ANNE Wizard – and associated applications – ANNE Hasher and ANNE Miner. We bring you the ANNE Talk public forum for all things ANNE, followed by the new Kuno platform that will serve as a pilot and a blueprint for distributed applications of its kind. Further down the road, we have a peer-to-peer social network and a music streaming service under construction.
Follow @r_a_d_a_n_n_e and @annemedia_web on X to keep tabs on the latest.

Why should I care? What’s in it for me?
ANNE is an exciting opportunity that allows you to have a real impact and work together with others to create positive change. Just like cryptocurrencies have empowered us to send and receive money freely without needing banks, ANNE empowers us to take control of our data and the applications we use.
In a world where web applications run directly on your personal computer, giving you ownership of your data and the applications themselves, you’ll choose to reveal your identity when desired and only when desired. This is the essence of privacy.
With ANNE, you can communicate directly and securely with other users and soon with anyone around the world. This means you can chat, make calls, or share files while keeping your conversations private and free from outside control.
As a future possibility, if you’re interested, you can earn money by allowing certain advertisers to connect with you through permissionless, interactive peer-to-peer ads.
You can also participate in creating a community knowledge base. This allows you to share information, like linking names with events or locations, and help build personalized maps of data that grow with everyone’s contributions. By contributing useful information, you can earn rewards in annecoin.
If you’re a researcher, ANNE offers a way to find connections among people, places, or topics without having to sift through endless unrelated information. This can make your research more effective and insightful.
If you’re a developer, you can make a difference by building unstoppable applications, using shared data resources and a vast arsenal of API functions and ANNE libraries that are at your disposal.
You can even earn annecoin by using spare space on your hard drive or by collaborating with ANNE Media or ANNE Network if you have skills to contribute.
And much more… In short, ANNE is about being part of an exciting journey where you can learn, contribute, and make a difference. Collaboration with mutual benefits is the very essence of ANNE’s distributed data network.

ANNODE: your personal server
An ANNODE is the software that turns a computer into a sovereign node in the ANNE network. It runs on ordinary hardware: a Raspberry Pi, an old laptop, a home server, or a cheap VPS.
Once installed, your ANNODE holds a complete replica of the ANNE datachain: every block and every relon ever broadcast. From that local chain, it builds the neuromorphic hypergraph and participates in the peer-to-peer mesh: forwarding Alt Data requests for subnetworks you’ve opted into, serving files via ANTOR, routing encrypted messages, and optionally mining new blocks. A graphical interface provides a wallet, peer controls, and an encrypted messenger for direct node-to-node communication.
Running an ANNODE makes you a peer, not a client. The network has no servers, only nodes. Yours holds the same data as everyone else’s and answers to no one but you. What you get out of it depends on what you and others put in: more capable hardware means faster queries and better mining performance, but the baseline participation is open to anyone.

Installing ANNE: A step-by-step guide…
Becoming part of the ANNE network is as simple as installing any desktop app. We support Linux (dnf, apt, pacman), macOS 10.12+, and Windows 10/11. The ANNE Wizard automates everything – dependencies, setup, and configuration.
Steps:
- Download the ANNE Wizard installer for your Operating System.
- Run the installer: it will prompt you to install any missing dependencies (Java, MariaDB).
- Database setup: configures MariaDB, creates the annedb database, sets the root password, and auto-downloads and imports a recent annechain snapshot.
- Install core tools – Annode, ANNE Hasher, and ANNE Miner.
- Configuration: guided prompts for networking and ANNE key generation, plus auto-configuration of node and miner properties.
- Firewall: automatically opens the peer-to-peer port in your OS firewall.
- Finish: adds desktop shortcuts and menu integrations.
- Start annode and let it catch up with the network.
If you’re the type who likes to know what’s under the hood, or just wants to understand the machinery better, follow our manual guide.
Note: Annode is a peer-to-peer application. For optimal connectivity, you may need to open ports on your router. Our ANNE Wizard will guide you through this if required. For further help, visit the ANNE Forum.

ANNE Cloud: what emerges when we all run nodes…
The ANNE Cloud is not a thing you run. It is what emerges when thousands of people run and use ANNODEs: the real-world manifestation of a distributed data network in its most literal form.
Each ANNODE contributes storage, bandwidth, and computation from personal hardware scattered across the globe. Together, they form a self-organizing infrastructure with no central coordination, no single point of failure, and no dependency on corporate data centers. The cloud is the collective sum of all participating nodes.
This inverts the traditional model. In the old cloud, you sent your data to servers owned by others. In the ANNE Cloud, the public datachain is replicated across every node; your copy lives on your hardware alongside everyone else’s. Private data stays under your control, shared only with peers you authorize. Your ANNODE holds the complete chain locally, serves the hypergraph from memory, and fetches files or alt data from peers when needed. Applications talk to your local node, which negotiates the distributed network on your behalf.
The ANNE Cloud is not a metaphor. It is the literal aggregation of every ANNODE running worldwide, providing the same services as centralized platforms but owned by no one yet powered by everyone who participates.

Which apps can I use right now with ANNE?
The ANNE ecosystem delivers a comprehensive suite of user-friendly applications designed to enhance your experience with distributed data networking, wallet management, and community engagement. These tools are accessible once your annode is installed and running, with many served directly from your local instance for privacy and convenience. Below is an overview of the key applications and features:
Core Annode Features and Tools:
- Annode: An intuitive graphical user interface for monitoring network status and node performance, and for accessing integrated tools.
- Advanced Peer Management Tools: Utilities to connect, manage, and optimize peer-to-peer interactions within the ANNE network.
- Wallet: A secure, built-in wallet for handling annecoin transactions and storage.
- Cables: An end-to-end (E2E) encrypted messenger for anonymous annode-to-annode (A2A) communication.
Installation and Utility Suite brought to you by ANNE Media
- ANNE Wizard: A professional-grade installation suite that guides you through setup with ease, ensuring a smooth onboarding process.
- ANNE Hasher: A pre-mining tool that fills unused space on your hard drive with pre-computed mathematical solutions for ANNE Miner to search.
- ANNE Miner: Low-energy mining software that earns annecoin by participating in the network’s consensus and validation processes.
Community and Collaboration
- ANNE Talk: A public forum for discussing all things ANNE, including tips, updates, and community-driven support. Join conversations, ask questions, and share ideas with fellow users.
Localhost Applications
Once your annode is up and running, you’ll have access to the main suite of web-based apps at http://localhost:9116/ANNE.html. These include:
- Multi-Account Web Wallet: Manage multiple annecoin accounts securely from one interface, with features for transfers, balances, and transaction history.
- Annecoin Swaps: Easily exchange annecoin with other assets within the ecosystem.
- Annex App: A lookup tool for searching persons, companies, locations, dates, and numbers. You can sponsor “neurons” or provide ratings via the unique FEELZ/OPINIONZ closed-loop feedback system. This helps improve data quality and relevance. For ways to earn annecoin through participation, see “How can I earn annecoin?” in this FAQ.
- LUKAT Conceptual Search: An innovative search engine that allows you to explore conceptual connections, discover hidden relationships, and uncover insights across the ANNE network.
Gaming and Mining Experiences
Numiner: A Layer One, 100%-payout Nume mining competition integrated into the ANNE ecosystem – access it at http://localhost:9116/aon.htm
- Objective: Mine “Numes” by matching numbers from your mining entry to the block ID that processes it. Collect and combine Numes to form a “Numestone,” earning 50% of the Numestone fund – 1 BILLION annecoins await the first creator!
- Rewards: Each mined Nume comes with an annecoin payout. There’s even a consolation reward for matching just two numbers.
- Nume Types and Progression: Advance through mini, mid, big, or full Numes, climb the Numiner ranks, and trade your Numes on the built-in Numes Market.
- Advanced Play: Lock matching Numes to create a Numestone for bigger rewards.
- For full rules, strategies, and updates, visit the dedicated Numiner page within the app.
If you’re new, start with the ANNE Wizard for installation, then explore the localhost suite to get the most out of ANNE. If you have questions about anything ANNE, let’s talk at ANNE Talk.

1Schema: the data language of ANNE…
1Schema is the data language of the ANNE network. Every piece of information stored on the network must conform to a single, fixed structure. This guarantees that all data, from any application, is inherently compatible and queryable. Everything speaks the same language.
Most data today lives in corporate silos or proprietary systems, locked inside formats that only specific applications can read. 1Schema takes a different approach. Information is stored not as opaque blobs, but as networks of interconnected concepts called neurons and the relationships between them called relons.
This turns the ledger into a living knowledge graph where meaning is explicit, not trapped inside corporate databanks. A person can look at a neuron and understand what it represents. A machine can traverse the connections and reason about what it finds. The same data serves both.

Neurons and relons: how ANNE structures knowledge…
Neurons are the basic units of meaning in the ANNE network. They represent anything that can be named or referenced: a person, a place, an organization, a color, a number, an abstract concept. Each neuron has a unique identifier and exists as a node in a vast, interconnected graph.
Relons are the connections between neurons. Every relon is a semantic triplet: FROM → RELN → TO. In plain terms, it says that one thing relates to another thing in a specific way. Alexander → born → Pella. Paris → capital → France. water → freezes → 0 degrees Celsius. The FROM is the subject, the RELN is the relationship, and the TO is the object.
This triplet structure is the only format the network accepts. Complex ideas are built by composing many triplets, each one a small, unambiguous piece of meaning. A biography of Alexander is not a block of text; it is thousands of triplets connecting him to places, people, events, and dates. A product catalog is triplets linking items to categories, prices, and sellers. A discussion forum is triplets connecting users to posts, posts to topics, and reactions to posts.
This design turns the datachain into something no ordinary database can match: a dense, traversable graph where every piece of information is explicitly connected to every other piece. Meaning is not hidden in text strings or locked in application logic. It is discoverable, machine-readable, human-understandable, and propagated throughout the network, all at once.
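The triplet composition described above can be sketched with plain data structures. The field names and identifiers here are illustrative, not 1Schema’s actual encoding:

```python
from typing import NamedTuple

class Relon(NamedTuple):
    frm: str   # subject neuron (FROM)
    reln: str  # relationship (RELN)
    to: str    # object neuron (TO)

# A fragment of a biography, decomposed into unambiguous triplets:
facts = [
    Relon("Alexander", "born", "Pella"),
    Relon("Alexander", "tutored_by", "Aristotle"),
    Relon("Pella", "located_in", "Macedon"),
]

# Everything known about a neuron is just a filter over the relons:
about_alexander = [r for r in facts if "Alexander" in (r.frm, r.to)]
assert len(about_alexander) == 2
```

A long biography is simply a much longer list of the same small units, each one independently checkable and queryable.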

Early Concepts: the building blocks of meaning…
Early Concepts are the starter set of neurons that bootstrapped the network at genesis. They are the primitive building blocks. Think of them as the alphabet before you write words and compose sentences.
ECs cover the most basic categories of thought and experience: primitive classes like thing, action, and state; semantic roles like actor, affected thing, and instrument; meta-relations like cause and effect; qualities like color and size; feelings like joy and anger; and spatial concepts like in, on, past, and future.
But ECs are not just a list. They come pre-wired with relationships. actor is linked to living. fast is linked as the opposite of slow. cause points to effect in a temporal chain. This pre-wiring encodes a vast amount of common-sense knowledge directly into the substrate. When you later define a new concept, you anchor it to these primitives, and it inherits their meaning. The system doesn’t need to learn what “causality” is from scratch; it already knows.

The neuromorphic hypergraph: ANNE’s living knowledge structure…
The neuromorphic hypergraph is the living knowledge structure that emerges from all the data stored on ANNE. Every neuron and every relon ever broadcast becomes part of this vast, interconnected web of meaning.
A traditional database stores information in tables with rows and columns, or in documents with labeled fields. You retrieve what you put in, and the relationships between things must be rebuilt every time you query. The hypergraph works differently. Information is stored as connections from the start. A person is not just a row with a name and birth date. They are a node linked to their birthplace, their occupation, their friends, their interests. A place is not just coordinates. It is a node linked to its history, its visitors, the events that happened there.
This structure lives on every ANNODE. When you run a query, your local node doesn’t scan tables or parse documents. It traverses connections, following paths from one neuron to another. The result is not a flat list of matching records but a rich neighborhood of meaning, pulled from the graph in milliseconds.
Every new relon adds another thread to this fabric. The hypergraph grows denser and more valuable over time, not because someone designs it that way, but because every contribution naturally weaves itself into what already exists.
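The difference between traversing stored connections and re-deriving relationships at query time can be illustrated with a minimal adjacency index. This is a toy model, not the hypergraph’s actual in-memory layout:

```python
from collections import defaultdict

# Build the adjacency index once, as relons arrive -- not at query time.
index = defaultdict(list)
relons = [
    ("Alexander", "born", "Pella"),
    ("Pella", "capital_of", "Macedon"),
    ("Aristotle", "tutored", "Alexander"),
]
for frm, reln, to in relons:
    index[frm].append((reln, to))
    index[to].append((reln, frm))  # index both directions for traversal

def neighborhood(neuron):
    """A query is a lookup over stored connections, not a table scan."""
    return index[neuron]

assert len(neighborhood("Alexander")) == 2  # born->Pella, tutored<-Aristotle
```

Each new relon extends the index as it is integrated, so querying a neuron’s neighborhood stays a constant-time lookup no matter how large the graph grows.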

1Schema vs the hypergraph: what’s the difference?
1Schema is the protocol. The hypergraph is the accumulated result of everyone using it, a living structure that gets smarter the more people contribute.
1Schema is the fixed rulebook, enforced at Layer 1, for how data must be structured. It says: every piece of information must be a relon, a 6-tuple with specific fields. It defines the dimensions (TYPE codes) like BE, HAS, AWARENESS. It specifies that neurons exist only when referenced. It never changes. Think of 1Schema as the grammar and vocabulary of a language.
The neuromorphic hypergraph is the living knowledge structure built from every relon ever broadcast. It is the sum total of all neurons and all their connections, stored in memory on every ANNODE and optimized for traversal. It grows denser over time as more relons are added, weaving an increasingly rich web of meaning. Think of the hypergraph as everything ever written in that language: all the books, all the conversations, all the connections between ideas.
When you broadcast a relon saying Alexander → was born in → Pella, 1Schema ensures it is formatted correctly. The hypergraph then integrates that relon, adding Alexander and Pella as neurons (if they didn’t exist) and creating a new connection between them. Later, when someone queries for things connected to Alexander, the hypergraph traverses that connection and returns it alongside every other relon that touches him.
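A sketch of that protocol-level check, under stated assumptions: the text specifies only that a relon is a 6-tuple and names the dimension codes BE, HAS, and AWARENESS; the field order and the three trailing fields below are hypothetical.

```python
# Dimension codes named in the text; the full set is fixed by 1Schema.
VALID_TYPES = {"BE", "HAS", "AWARENESS"}

def validate_relon(relon):
    """A relon must be a 6-tuple whose TYPE code is a known dimension.
    The field order and the three trailing fields are hypothetical."""
    if not isinstance(relon, tuple) or len(relon) != 6:
        return False
    frm, typ, to, _f4, _f5, _f6 = relon
    return typ in VALID_TYPES and bool(frm) and bool(to)

assert validate_relon(("Alexander", "BE", "person", None, None, None))
assert not validate_relon(("Alexander", "LIKES", "Pella", None, None, None))
assert not validate_relon(("Alexander", "BE", "person"))  # not a 6-tuple
```

Because every node applies the same fixed rules, a malformed relon is rejected everywhere at once: no central gatekeeper is needed.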
The hypergraph also exhibits emergent properties that 1Schema does not prescribe. The R-factor, the average number of connections per neuron, increases over time. Dense clusters form around frequently referenced concepts. Patterns emerge that were never explicitly programmed. None of this is in the 1Schema rules. It is a consequence of many participants in the distributed data network following those rules over time.

Hyper and neuromorphic: what the names mean…
“Hyper” refers to a hypergraph, which can capture relationships that simple graphs cannot. In a regular graph, every connection links exactly two things. But real-world knowledge is messier. An event might involve dozens of people, a place, and a time, all connected simultaneously. The hypergraph handles this through “clumps,” which bundle multiple relons together into a single semantic unit. Think of a clump as a mini-graph that represents one complex fact.
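A clump can be pictured as a bundle of relons sharing one identifier. The grouping shown here is illustrative, not ANNE’s actual clump encoding:

```python
# One complex fact -- an event involving several participants, a place,
# and a time -- bundled into a single semantic unit.
clump = {
    "id": "battle_of_gaugamela",
    "relons": [
        ("Alexander", "participated_in", "battle_of_gaugamela"),
        ("Darius", "participated_in", "battle_of_gaugamela"),
        ("battle_of_gaugamela", "occurred_at", "Gaugamela"),
        ("battle_of_gaugamela", "occurred_in", "331_BC"),
    ],
}

# Every relon in the clump touches the shared event neuron, so the
# whole fact can be addressed, stored, or retrieved as one unit:
assert all("battle_of_gaugamela" in (r[0], r[2]) for r in clump["relons"])
```

A simple graph could only record the pairwise links; the clump makes the many-way relationship itself a first-class object.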
“Neuromorphic” means brain-like. The design takes inspiration from how biological neural networks organize information. In the brain, meaning doesn’t live in single neurons. It emerges from patterns of connections between them. The hypergraph works the same way. A neuron has no meaning in isolation. Its meaning comes from all the relons that connect it to other neurons. The more connections, the richer the meaning.
As the network grows, the average number of connections per neuron increases, enriching the distributed data network’s capacity for pattern recognition and inference. This is analogous to synaptic density in a brain. Higher density enables more sophisticated pattern recognition and inference. The system can discover latent connections that were never explicitly programmed, simply by traversing the growing web of relationships. The graph is hyper in its structure and neuromorphic in its behavior.
The hypergraph is not itself an artificial intelligence. But it is the kind of structure an intelligence would need to grow up in. Like a developing brain that builds itself through experience, the hypergraph provides the associative memory and contextual grounding that any system, biological or artificial, requires to actually understand the world rather than just store facts about it.

Growing the vocabulary: how new data types emerge…
The 1Schema protocol is set in stone. It defines the structure and validation rules for relons, the indexing of neurons, and the core dimensions (BE, HAS, AWARENESS, etc.) that organize meaning. But it does not prescribe what neurons can exist or what relationships can be asserted between them. This means new types of data emerge naturally through ordinary use, no upgrades required.
Say you want to create a new category called “vinyl record.” You broadcast a relon stating that vinyl_record BE physical_object. That’s it. From that moment forward, vinyl_record exists as a recognized thing in the network. Anyone can use it.
Next you want to define properties that records have. You broadcast PARAM relons from vinyl_record to concepts like release_year, label, genre. Those properties now exist. Anyone creating a relon about a specific record can attach values to them. A collector builds an app to catalog their collection. A record store builds an app to list inventory. Both apps read from and write to the same set of concepts because they’re all using the same underlying language.
No protocol vote. No permission from anyone. No waiting for an upgrade. The distributed data network grows because participants define what they need, when they need it, and others build on top of those definitions. The vocabulary of the network expands with every new concept its users invent.
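The vinyl-record example above can be sketched as follows. The `broadcast` helper and the exact relon shape are assumptions for illustration; real annode APIs will differ.

```python
# Hypothetical broadcast helper; real annode APIs will differ.
network = []

def broadcast(subject, dimension, obj):
    network.append((subject, dimension, obj))

# Define a new category: from this relon on, vinyl_record exists
# as a recognized thing in the network, usable by anyone.
broadcast("vinyl_record", "BE", "physical_object")

# Define properties records can have via PARAM relons.
for prop in ("release_year", "label", "genre"):
    broadcast("vinyl_record", "PARAM", prop)

# Anyone can now describe a specific record with the shared vocabulary.
broadcast("kind_of_blue", "BE", "vinyl_record")
broadcast("kind_of_blue", "HAS", ("release_year", 1959))
```

A collector's catalog app and a record store's inventory app would both read and write these same concepts, with no protocol vote in between.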

Beyond LLMs: 1Schema as a foundation for real intelligence…
AI LLM models are correlation engines. They process vast amounts of text and learn statistical patterns. Ask them a question, and they generate a response that sounds plausible based on what they’ve seen. But they don’t understand what they’re saying. They have no internal model of the world, no sense of cause and effect, no grounded meaning.
This is because they were trained on the web as it exists: a collection of documents hosted on centralized servers, disconnected from any underlying semantic structure. They learn what words tend to follow other words, but they never participate in a distributed data network where meaning is explicit, where relationships between concepts are encoded as first-class citizens, and where information is interconnected rather than siloed.
1Schema offers a different foundation. Its knowledge graph is built on explicit relationships between concepts that are themselves grounded in primitive experiences. red is connected to color and sense_sight. pain is connected to feelz and has a negative polarity. An intelligence that operates on this substrate doesn’t just pattern-match the word “pain”; it can traverse the connections and understand that pain is something living things experience, that it has valence, that it is linked to avoidance behavior.
This opens the door to actual reasoning. Rules can be encoded as relons themselves. If the graph contains Socrates BE human and human BE mortal, a simple traversal produces Socrates BE mortal. That’s not statistical prediction; that’s deduction. The distinction between semantic memory (facts) and episodic memory (experiences) is built into the protocol, mirroring how human memory works.
ANNE is not an AGI. It is the kind of foundational structure an AGI would need to grow up in. It provides the grammar, the vocabulary, and the common-sense ground truth that any intelligence, artificial or otherwise, requires to actually understand the world rather than just predict the next word. And because ANNE itself is a distributed data network built on collective infrastructure, understanding can emerge from collective participation, not from corporate-controlled training data.

Collaboration without intermediaries: how ANNE changes the game…
On the traditional web, your data lives in corporate silos. When you use a running app, that company owns your pace, your routes, your workout history. When you use a meal planner, another company owns what you eat. Each silo builds a profile of you, often linked to your real identity, and uses it to target ads or sells it to data brokers. You trade your privacy for convenience, and the companies profit.
ANNE flips that model. Data lives on the public datachain, not in any company’s servers. When a developer builds a hiking app, they don’t build an isolated database behind it. They build an interface to the public knowledge graph that already exists. Trail conditions, route popularity, seasonal weather patterns, user reviews. All of this is public data, contributed by users pseudonymously, available to any application that wants to surface it.
Your private data never enters the picture unless you explicitly choose to broadcast something. Even then, it is attached to your pseudonymous identity, not your real name. No company sits in the middle collecting your information. No algorithm profiles you to sell your attention. The data you contribute becomes part of a shared resource that benefits everyone, but no single entity controls it or profits from it exclusively.
This model also opens the door to something entirely new: peer-to-peer advertising. If you choose to expose certain interests or behaviors tied to your pseudonymous identity, advertisers can reach you directly through the network. You opt in, you set the terms, and you get paid directly for receiving interactive ads via smart contracts. No Google sitting in the middle charging hefty fees. No data brokers reselling your profile without your knowledge. You decide what exposure is worth, and advertisers pay you for the privilege of your attention.
If you report that a trail is muddy, that fact is now in the public knowledge graph. Any hiker using any related app can see it. The app developer doesn’t own that data. They can’t sell it. They can’t lock it behind a subscription. They just provide a window into the graph. If you don’t like their interface, you switch to another app and take all your data with you, because your data was never theirs to begin with.
This is the essence of a distributed data network: collaboration without intermediaries. Applications compete on experience, not on hoarding your information. The network grows more valuable with every contribution, and you remain exactly as visible or as private as you choose to be.

Turning knowledge into currency: the 1Schema economy…
In 1Schema, every neuron and relon has economic dynamics built into Layer 1.
When you create a neuron, you can become its sponsor. Sponsorship is not ownership in the cryptographic sense, but an economic position. Every time that neuron is referenced in any future relon, a protocol-enforced payment flows to the sponsor. This creates a direct financial incentive to build or acquire high-quality, useful concepts that others will want to connect to.
Sponsorship is transferable through a competitive, algorithmic process. Anyone can acquire sponsorship of a concept by paying a price determined by the protocol. The existing sponsor receives a payout for their contribution, and the new sponsor assumes the ongoing payment stream from future references. This creates a liquid market for shared concepts, where value is determined algorithmically based on usage and demand.
Keyed neurons function differently. They are controlled via public-private key pairs, like registered assets. Only the key holder can modify them. Their value derives from both their position in the graph and the exclusivity of control.
Both classes participate in an economy native to the ANNE network, where value flows continuously to those who build and maintain the semantic infrastructure others depend on.

Mining with hard drives: Proof of Space Time explained…
PoST is a consensus protocol that uses hard drive space as a sybil-resistant resource. Miners first run a one-time plotting process that precomputes cryptographic hashes using their account identifier and writes them to disk. Plotting is computationally intensive. Once complete, the drive contains many nonces, each holding precomputed data.
For each block, the network generates a challenge from the previous block’s signature. The miner reads a specific 64-byte scoop from each nonce and computes a deadline: the number of seconds that must elapse before that miner may forge a block. The miner with the lowest valid deadline forges the next block.
The protocol’s security rests on this asymmetry: plotting cannot be done on demand, so any influence requires committing storage space in advance. That is the “space” component. The deadline itself provides the “time” component, proving that time has passed relative to the previous block.
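The plot-then-read cycle can be sketched as follows. The hash function, scoop count, and deadline formula are assumptions modeled loosely on Burst-style PoST, not ANNE's actual parameters.

```python
import hashlib

SCOOP_SIZE = 64  # each scoop is 64 bytes, per the description above

def plot_nonce(account_id, nonce_index, scoops=4):
    # One-time plotting: deterministically precompute hash data from the
    # account identifier and nonce index. Sketch only; the real plot
    # format and hash function are protocol-defined.
    data = b""
    seed = f"{account_id}:{nonce_index}".encode()
    while len(data) < scoops * SCOOP_SIZE:
        seed = hashlib.sha256(seed).digest()
        data += seed
    return data[: scoops * SCOOP_SIZE]

def deadline(account_id, nonce_index, challenge, scoop_no, base_target=1000):
    # Mining: read one scoop per nonce, hash it with the block challenge,
    # and turn the result into a number of seconds to wait.
    plot = plot_nonce(account_id, nonce_index)
    scoop = plot[scoop_no * SCOOP_SIZE : (scoop_no + 1) * SCOOP_SIZE]
    h = hashlib.sha256(challenge + scoop).digest()
    return int.from_bytes(h[:8], "big") % base_target

challenge = hashlib.sha256(b"previous_block_signature").digest()
# More nonces on disk means more deadline candidates, hence better odds.
best = min(deadline("acct-1", n, challenge, scoop_no=2) for n in range(100))
```

Because plots are a pure function of the account id, they cannot be shared between accounts, and because the challenge depends on the previous block, deadlines cannot be precomputed for future blocks.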
While Proof of Work requires continuous electricity consumption (mining power proportional to hashrate) and Proof of Stake requires locking up coins (influence proportional to wealth), PoST requires neither. Mining probability is proportional to committed disk space. The one-time plotting phase is computationally intensive, but ongoing mining only reads from disk and computes deadlines at low cost.

A beginner’s guide to mining annecoin…
Mining annecoin runs on regular hard drives. No expensive ASICs, no noisy rigs, no industrial electricity bills, no oligarchy of staking. If you’ve got storage space and a bit of time, you can mine.
Before you start: You’ll need to run ANNE Hasher first. This pre-computes mathematical solutions and stores them on your drive, enabling your node to participate in mining. It’s a one-time process that can take hours or days depending on how much space you’re allocating. Think of it as filling your drive with Proof of Work hashes (plots) that your miner will later search.
Once that’s done, actual mining is low-energy and quiet. Your computer reads the plots you’ve computed with ANNE Hasher and you can add more at any time.
Two ways to mine:
- SOLO: Go it alone. Allocate several terabytes; more storage means better odds.
- SHARE: Pool with others. A few hundred GB can work here. Share miners compete for 7 equally split rewards in 4 out of every 5 blocks.
Steps:
- Run ANNE Hasher to plot your drives.
- Open ANNE Miner, add your account ID in config, pick your tier, and point it to your plots.
- Let it run. Watch your balance grow.
Installed via ANNE Wizard? You’re already configured. Just run ANNE Hasher to plot your drives, then launch your miner and watch your balance grow.
Need help? The ANNE Talk forum is the place to ask.

Keeping mining fair: how ANNE prevents concentration…
Three mechanisms limit concentration, ensuring no single entity dominates.
First, a handicap rule: if a miner wins five of the last ten blocks, a 240-second penalty gets added to their deadline. This makes it harder for the same miner to win repeatedly.
Second, solo-only blocks: every fifth block is reserved exclusively for solo miners. Share pools cannot participate in these blocks at all, giving individual miners a regular shot at full rewards.
Third, the share tier itself: even when a large miner wins the solo portion, seven smaller miners still receive shares from the same block. Rewards get distributed across multiple participants rather than consolidating with the winner.
The result is a system where larger storage holders maintain a proportional advantage without excluding smaller participants.
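The handicap rule is easy to sketch. The penalty size comes from the section above; the win-history format is illustrative.

```python
def effective_deadline(raw_deadline, miner, recent_winners, penalty=240):
    # Handicap rule: a miner who won 5 or more of the last 10 blocks
    # gets a 240-second penalty added to their deadline, making a
    # winning streak progressively harder to sustain.
    if recent_winners[-10:].count(miner) >= 5:
        return raw_deadline + penalty
    return raw_deadline

# "A" won 5 of the last 10 blocks; "B" won only 3.
history = ["A", "A", "B", "A", "C", "A", "B", "A", "C", "B"]
```

With equal raw deadlines, miner B now beats miner A, redistributing wins away from the recent leader.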

Is 51% attack possible?
No. In Proof of Work, an attacker with majority hashrate can mine a private chain and later reveal it, rewriting the history of the whole distributed network. In a distributed data network using PoST, this is structurally impossible.
Deadlines are tied to specific blocks. They cannot be precomputed for future blocks. When a miner finds a solution, they broadcast an intent. Other nodes begin a 30-second grace period upon first observing any valid intent for that block. During this window, they accept intents from other miners whose solutions may have arrived slightly later due to propagation delays. When the window closes, the node selects the intent with the lowest deadline. That block is final. Any intent received after the window closes is invalid, regardless of the quality of its deadline.
The grace period serves a transparency mandate. A miner who discovers a low deadline cannot withhold it for strategic advantage. If they keep it private, no grace period triggers and no node recognizes their claim. If they broadcast after the window closes for a competing intent, their submission arrives too late and is rejected. There is no benefit to secrecy, only the certainty of exclusion.
No majority attacker can retroactively produce a competing block for a height that has already passed. The moment is gone. You either played within the rules, or you’re out.
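The grace-period selection logic can be sketched as follows, using the 30-second window described above; the intent format is hypothetical.

```python
def select_block(intents, grace=30):
    # intents: (arrival_seconds, miner, deadline) tuples, sorted by
    # arrival. The grace window opens on the first valid intent; any
    # intent arriving after it closes is invalid, regardless of how
    # good its deadline is.
    if not intents:
        return None
    window_close = intents[0][0] + grace
    eligible = [i for i in intents if i[0] <= window_close]
    return min(eligible, key=lambda i: i[2])

intents = [
    (0, "alice", 120),   # first intent: opens the 30-second window
    (12, "bob", 45),     # slightly late due to propagation, still counted
    (40, "carol", 3),    # best deadline, but the window has closed
]
winner = select_block(intents)
```

Carol's superior deadline is worthless because she broadcast too late: withholding a good solution buys nothing but exclusion.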

Keeping time in a serverless network…
There’s no universal clock. Instead, nodes establish network time by gossiping among themselves.
Your node talks to its peers and asks what time they think it is. If most of them agree, your node trusts that and stays put. If your clock is drifting, it adjusts to match the group. That’s your local time bubble.
These bubbles interconnect. Your peers have their own peers, who have theirs. Time propagates through the network like any other piece of gossip. A node in Europe may be in a different bubble than one in Australia, but the bubbles overlap. The result is that time converges network-wide with residual differences between bubbles.
The 30-second grace period exists partly to absorb these residual differences. A few seconds of skew between bubbles doesn’t matter. Only nodes in bubbles that drift more than 30 seconds from their peers fall out of consensus entirely, unable to participate until they resync.
You don’t need to sync your OS time through external sources such as NTP. An honest peer rarely drifts by more than two seconds, and a node with a properly synchronized clock will compute time offsets near zero, requiring no adjustment. If your OS clock is misaligned, your annode handles it automatically.

What is the emission schedule for annecoin?
Maximum supply is 2.2 trillion coins (denominated with two decimal places; 220 trillion annies). Emission occurs in three phases.
Primary mining phase: The first 397 lunar cycles (approximately 30 years). Block rewards start at 1 million coins and decrease by 1 percent each cycle.
Sunset phase: Cycles 398 through 690 (22 years). Rewards stabilize at 18,500 coins per block plus fees.
Fee-only phase: After cycle 690, no block rewards remain. Miners earn only transaction fees and firing fees from semantic references.
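The three-phase schedule above can be checked with a few lines. Note how 397 cycles of 1 percent decay bring the reward down into the neighborhood of the 18,500-coin sunset value.

```python
def block_reward(cycle):
    # Primary mining phase: rewards start at 1,000,000 coins and decay
    # 1 percent each lunar cycle. Sunset phase: a flat 18,500 coins per
    # block (plus fees). Fee-only phase: no block rewards remain.
    if cycle <= 397:
        return 1_000_000 * 0.99 ** (cycle - 1)
    if cycle <= 690:
        return 18_500
    return 0

# 1% decay over the primary phase lands near the sunset value:
# 1_000_000 * 0.99**396 is roughly 18,700 coins.
```

Fees (transaction and semantic firing fees) are paid on top of these figures and become the sole miner income after cycle 690.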
The ANNE Development Fund (ADF):
The ADF receives a portion of each block reward during primary and sunset phases. At genesis, the split is 70 percent to mining rewards and 30 percent to the ADF. The ADF’s percentage also decays by 1 percent each cycle, reaching 14.7 percent by cycle 154.
The ADF is not a pre-mine. Funds are created only as blocks are mined and distributed progressively. No coins exist before they are earned.
ADF funds support ecosystem development: API libraries, core protocol maintenance, documentation, community initiatives, and integration tools. For developers, this means paid bounties and grants for building on ANNE. For community members, it means funded projects, marketing efforts, and tools that make the network more useful.
The fund is a way for you to earn for contributing, not just for mining. It’s a public fund by design so anyone can observe the ADF account and track how coins are disbursed over time.

Ways to earn annecoin (beyond just mining)…
Annecoin rewards participation in ANNE’s distributed data network through Layer 1 (L1) incentives for various roles. Whether you’re running a node, mining, creating data, or sponsoring content, there are (or will be) multiple ways to earn, all designed to be fair and decentralized.
Ways to Earn:
- Data Creation: Sponsor “neurons” (data points on persons, companies, etc.) and “relons” (relationships). Earn when your data is used or sponsored.
- Data Sponsorship: Use your annecoins to sponsor neurons in the ANNE hypergraph via the Annex app. When a sponsored neuron “fires” (gets connected/rated), you receive annecoin. Sponsorships can be “annexed” (transferred) with algorithmic pricing and boostie protection for gains. You earn either through “firing” fees or through the algorithmic annex price. No losers.
- Governance and Admin: Help manage ANNE’s worldview as a data admin; get paid per action for moderation or curation.
- Community Contributions: Participate as a developer, marketer, ANNE Talk forum contributor, or moderator, and earn through L1 streams for valuable input.
- Numiner: Mine and trade Numes at http://localhost:9116/aon.html, create Numestones, and win massive rewards (1 BILLION annecoins for the first!).
All roles incentivize ongoing work. Start by installing ANNE (see “Personal Server Setup: ANNE Node Installation Guide for Linux, macOS & Windows”), explore, or join ANNE Forum to discuss opportunities and collaborate.

ANTOR: moving files across the peer-to-peer network…
ANTOR is the file transfer protocol for the ANNE network. It enables direct distribution of files between ANNODEs without central coordination or trackers.
When a file is prepared for sharing, it is divided into fixed-size segments called ants, 512KB each by default. Every ant is individually hashed. A manifest file with the .antor extension is created containing the complete file structure: total size, overall digest, protocol version, and an ordered list of each ant with its position and hash. This manifest serves as the authoritative reference for both distribution and reassembly.
When a node requests a file, it first contacts known custodians to obtain the manifest. Once the manifest is received, the node begins parallel retrieval of ants from multiple sources, up to four concurrent transfers by default. Each arriving ant is validated against its recorded hash before acceptance. If validation fails, the ant is discarded and requested from a different source. Failed transfers automatically retry after a timeout interval.
As ants arrive and pass validation, they are stored locally. When all ants are obtained, the node reconstructs the file by concatenating the decoded segments in order and validating the result against the overall digest in the manifest. The complete file is then stored in the appropriate directory based on its classification: shared content, cached data, or private files.
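The split-hash-manifest-reassemble cycle can be sketched as follows. Manifest field names are illustrative; only the 512KB ant size and the per-ant plus whole-file validation come from the description above.

```python
import hashlib

ANT_SIZE = 512 * 1024  # 512KB segments ("ants") by default

def make_manifest(data, version=1):
    # Split the file into ants, hash each one, and build the .antor
    # manifest: the authoritative reference for distribution and
    # reassembly. Field names here are illustrative.
    ants = [data[i:i + ANT_SIZE] for i in range(0, len(data), ANT_SIZE)]
    return {
        "version": version,
        "size": len(data),
        "digest": hashlib.sha256(data).hexdigest(),
        "ants": [
            {"pos": i, "hash": hashlib.sha256(a).hexdigest()}
            for i, a in enumerate(ants)
        ],
    }

def reassemble(manifest, fetched_ants):
    # Validate each arriving ant against its recorded hash, concatenate
    # in order, then validate the result against the overall digest.
    ordered = []
    for entry in manifest["ants"]:
        ant = fetched_ants[entry["pos"]]
        if hashlib.sha256(ant).hexdigest() != entry["hash"]:
            raise ValueError("bad ant: discard and refetch from another source")
        ordered.append(ant)
    blob = b"".join(ordered)
    if hashlib.sha256(blob).hexdigest() != manifest["digest"]:
        raise ValueError("overall digest mismatch")
    return blob

payload = b"x" * (ANT_SIZE + 100)  # spans two ants
m = make_manifest(payload)
parts = {e["pos"]: payload[e["pos"] * ANT_SIZE:(e["pos"] + 1) * ANT_SIZE]
         for e in m["ants"]}       # in practice fetched from multiple peers
restored = reassemble(m, parts)
```

Because each ant is independently verifiable, its position in the dictionary of fetched parts can be filled by any peer, in any order, which is what makes parallel multi-source retrieval safe.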
The protocol’s reliance on multiple sources and independent segment verification makes transfers resilient to node churn and network interruptions. If any peer goes offline during transfer, remaining ants can be fetched from others.

Private file sharing with ANTOR: what you can do with it…
ANTOR incorporates multiple protective layers. All message exchanges use authenticated encryption based on a shared secret derived from the sender’s private key and the recipient’s public key via elliptic-curve Diffie-Hellman. This ensures both confidentiality and that the message originated from the claimed sender. No third party observing network traffic can determine what files are being transferred or their point of origin.
Access control operates at the file level. Content can be marked as local-only, meaning it will only be shared with peers explicitly approved by the node operator. An internal approval map tracks permitted peers. For local-only files, even the manifest is withheld unless the requester appears in this map. Public files remain available to any node.
Files integrate with the broader ANNE knowledge graph through metadata relons stored in the hypergraph. When a file is shared, its content hashes and manifest references become discoverable via hypergraph queries. A search for documents related to a specific topic returns not just references but the means to retrieve the actual files from peers who hold them.
Storage is organized into segregated directories on each ANNODE: a2a_neurons for shared content, a2a_cache for temporary data, and a2a_localonly for private files. All transfers use the same encrypted peer channels as other network communication, adding negligible overhead while inheriting the network’s established trust mechanisms.
The protocol supports a range of use cases: distribution of media and documents without central infrastructure, secure replication among authorized participants, and delivery of application components across the distributed data network. Files become addressable components of the shared knowledge base while remaining under their owner’s control.

The Alt Data Network: handling the data that doesn’t fit…
The Alt Data Network handles the kinds of distributed data that don’t fit into the structured world of neurons and relons.
1Schema is designed for semantic knowledge: connections between concepts, facts about things, relationships that can be expressed as triplets. But not everything fits that mold. A social media post, a configuration file, a blob of JSON, proprietary application data: these are unstructured payloads. They need to be stored and transmitted, but they don’t need to be broken down into triplets and enshrined in the datachain forever.
The Alt Data Network fills this gap. It provides a request-response layer for applications within the distributed data network to exchange arbitrary payloads directly between nodes. When an application needs to fetch a post, submit a comment, or retrieve a configuration, it sends an alt data request. The request propagates through the network to nodes that have opted into handling that data type, and the response travels back along the same path.
The key distinction: metadata about which subnetworks exist (their type identifiers and their schemas) is queried through the hypergraph and discoverable by anyone. But the actual payloads only flow between nodes that have explicitly opted into participation. You can discover that a social application exists, but you cannot retrieve its posts unless your node has opted into the “social” data type.

Choosing what you support: opt-in explained…
Every node operator controls exactly which alt data network application data types their node participates in, through two simple lists in the ANNODE properties file.
The provider list specifies which data types your node can handle locally. For each type, you associate a script that gets invoked when a matching request arrives. If you want to run a social media node, you add for example “social.posts” to your provider list and point to a script that serves posts from local storage.
The whitelist specifies which data types your node is willing to accept and forward. Even if you don’t host content yourself, you can help the network by forwarding requests for types you support. You might add “weather.data” to your whitelist to relay requests to nodes that actually serve weather information.
When a request arrives, the node first checks its whitelist. If the type isn’t there, the request is rejected. If the type is whitelisted and the node is a provider for it, and the request is addressed to this node, the associated script executes and generates a response. Otherwise, the node forwards the request to its peers after appending its own identifier to a route list.
This opt-in model ensures you only incur local resource costs for applications you explicitly choose to support. A general-purpose node might provide several popular types. A private enterprise node might restrict itself to a handful of internal types, effectively creating a private distributed data network whose payloads never leave authorized participants.

1Schema, Alt Data, and ANTOR: how they work together…
These three protocols form a complete stack, each handling a different kind of data.
1Schema handles structured knowledge: facts, relationships, metadata. When you create something, its description, who made it, when, what it relates to, all of that lives in the hypergraph as neurons and relons. This is the permanent, shared understanding that every node can query.
The Alt Data Network handles live requests and responses. When an application needs something, it sends a request through the network to nodes that have opted into that data type. A provider node processes it and sends back a response. This is how applications talk to each other in real time.
ANTOR handles the actual files. When a response tells you a file is available, ANTOR fetches it, splitting it into chunks and grabbing them from multiple sources in parallel. This is how large payloads move efficiently across the distributed data network.
A concrete example: You open a social media app on your ANNODE. The app queries the hypergraph to discover what data types the social network uses. It then sends an alt data request asking for recent posts. A provider node responds with a list of posts, which may include ANTOR manifests for attached images. Your node then uses ANTOR to download those images from multiple peers simultaneously. When you post a reply, an alt data request carries your reply text to the distributed data network, and a relon may be created recording that you interacted with that post.
No single protocol tries to do everything. Each handles what it’s meant for, and together they form a complete decentralized stack.

Wen new Kuno?
The new version of the Kuno fundraising app is approximately 80% complete. It will initially continue as a centralized application, with plans for further development to achieve full decentralization within the ANNE network. This upgrade shifts Kuno from a simple donation platform to a full-fledged Monero crowdfunding system, inspired by platforms like Kickstarter.
Key upgrades:
- Project Creation Tools: Creators can set up detailed campaigns with funding goals, timelines, rich text descriptions with inline images, videos, carousel photos, and tiered rewards
- Automated and semi-automated ranking for visibility, promotion and demotion.
- Community Moderation Tools: A significant shift from centralized management to decentralized administration.
- Trustless accounts and a datawallet for creators’ crowdfund management.
- Taxonomy and categorization.
- Backer Engagement: Support projects, track progress via creator updates.
- Design Revamp: Big beautiful looks!
- Sovereign legal framework and jurisdiction protecting Kuno community moderators.
- Dedicated domain.
ANNE role:
Kuno will always be a platform dedicated to Monero-based crowdfunding. However, a peer-to-peer cash system has severe limitations when it comes to data capabilities, and it cannot serve as a tool for the full scope of decentralization. Initially, ANNE will deliver account management features. The post-launch plan is to introduce moderation features, rewards for moderators, community incentives, a closed-loop feedback widget, distributed backend storage, and the decentralization of the application itself. To enhance security and enable decentralized administration, annecoin may be implemented as an optional or back-end mechanism (not for donations/pledges). This approach maintains a smooth Monero user experience and workflow without requiring the use of annecoin.

A social network that… actually works?
ANNE, through the 1Schema, alt-data-network, and ANTOR protocols, enables a new type of social distributed data network. This is a fundamental departure from platforms where reach is artificially limited. Not primarily by shadow banning, but by a centralized data design that simply cannot serve everyone’s full feed without algorithmic filtering.
ANNE’s 1Schema stores data in semantic triplets, which brings meaning and reasoning to what is currently mostly statistical pattern matching: a deeper understanding of conversation and influence.
Subject → Predicate → Object are the core ANNE units (neurons/relons), turning messy text/posts into structured, meaningful facts – eg “UserID -> criticizes -> Big Pharma” or “UserID -> discusses -> Monero -> advocates -> privacy”.
This is a powerful, granular basis for recommendations and post distribution throughout the network, reaching just the right “targets” and beyond, because a data query is virtually unlimited in length at negligible cost, capable of firing a million neurons in a second.
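A toy pattern matcher shows how such triplet queries work. The relons and the wildcard-matching API are invented for illustration.

```python
relons = [
    ("user_42", "criticizes", "big_pharma"),
    ("user_42", "discusses", "monero"),
    ("user_7", "advocates", "privacy"),
    ("monero", "BE", "privacy_coin"),
]

def match(pattern, facts):
    # Pattern match over triplets; None is a wildcard. Chains of such
    # matches compose into arbitrarily long queries at negligible cost.
    s, p, o = pattern
    return [
        f for f in facts
        if (s is None or f[0] == s)
        and (p is None or f[1] == p)
        and (o is None or f[2] == o)
    ]

# Who should see a new post about Monero? Everyone who discusses it.
audience = [f[0] for f in match((None, "discusses", "monero"), relons)]
```

Distribution becomes a graph query rather than an opaque ranking algorithm: the post reaches exactly the neurons it semantically touches.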
It’s not “massive” because it runs on localhost; individually, you’re serving yourself, not the globe. A Pi can do it. You simply have the data, either by default or by opt-in (depending on what the data/files are), and the app uses it or fetches it from other opt-in peers.
Alt-data-network protocol enables us to distribute application-specific data across participating annodes in the distributed data network, a kind of data that does not conform to the 1Schema protocol, such as the full text of a post.
The ANTOR protocol enables peer-to-peer distribution and streaming of media content.
Many have experienced seeing a “sad” social media post and want to leave feedback but are conflicted about putting a “heart”. Do you really love or like that? It is long overdue that we can provide feedback on how we feel and also what we truly think about the things we spend our time consuming.
With the unique FEELZ/OPINIONZ closed-loop feedback system, you can rate post neurons using emotive responses, thoughts, positions, and state beliefs. Check out an iteration of the FEELZ/OPINIONZ widget in the previously discussed Annex app.
If peer-to-peer music streaming reminds you of piracy, that’s not our goal. With ANNE, we can ensure artists own their work and receive micro-transaction payments through the platform.
Last but not least, ANNE Media is dedicated to integrating Monero into the platform. ANNE does not compete with any peer-to-peer cash system. Rather, it serves as a platform for application decentralization.
Such a platform and its content will be fully distributed and decentralized, independent of DNS and of third-party providers or operators. Truly private, truly yours.

I want to help! How can I contribute?
ANNE welcomes annode operators, miners, software developers, marketers, data admins, ANNE Talk forum contributors, and moderators. Let’s talk at ANNE Forum. Register, comment on an existing topic, post a new one, or send a PM to radanne.
If you’re a developer: install annode, browse the API docs at http://localhost:9116/api-doc, and get in touch. Let’s build something they can never shut down. A distributed data network has no off switch.

Support the movement
Want to help build the future? Becoming an early supporter is one of the most direct ways to contribute. By swapping for annecoin you’re not just acquiring another token, you’re helping to build the distributed data infrastructure that makes personal sovereignty possible for everyone.
You can choose from monero, bitcoin, bitcoin cash, litecoin, ethereum, solana, or tether, and through the development fund you get annecoin in return. The fund gets resources to keep building; you get a stake in the present and future you’re reading about.
Swaps are available at anne.network for anyone without a node, but if you’re already running your own annode, you can swap directly from localhost at http://localhost:9116/aon.html through the alt data network. Either way it’s the same peer-to-peer trade, permissionless and direct, and your support keeps the community building.


