The Alt Data Network: Application-Specific Payload Distribution Across Opt-In Peer-to-Peer Subnetworks
A protocol for non-1Schema-compliant payload distribution within self-organizing meshes, with subnetwork discovery via the neuromorphic hypergraph
Abstract
The Alt Data Network serves as a specialized distribution protocol within the ANNE ecosystem, tailored for application-specific payloads that extend beyond the constraints of 1Schema’s structured semantics. It encompasses a variety of data types, including complete text outputs, configuration files, media assets, and proprietary application data.
The protocol operates on an opt-in basis, utilizing peer-to-peer subnetworks defined by hierarchical type identifiers. Each application can define its own custom types, which are then linked to processing logic, often implemented as scripts that run locally on participating nodes.
Node operators determine their participation by selecting, via ANNODE properties, the types they wish to provide or accept. Once a node opts in, it gains the capability to initiate requests, handle incoming requests by executing the corresponding logic, and disseminate requests using a flood routing mechanism that includes return-path tracking. The neuromorphic hypergraph facilitates querying for metadata regarding available subnetworks, enabling comprehensive global discovery of applications.
Access to payload distribution is restricted to nodes that have explicitly opted into the relevant subnetwork, thus preserving node sovereignty and preventing unnecessary data duplication. This architecture enables decentralized applications to function with dedicated processing and distribution infrastructures, while ensuring the discoverability of the applications across the network.

ALT DATA NETWORK
peer-to-peer mesh capable of replacing many traditional client-server backends, enabling truly decentralized applications and services
I. Introduction: Purpose and Scope
The ANNE ecosystem comprises multiple specialized protocols that together form a complete decentralized infrastructure. At its core, the 1Schema protocol serves as the semantic backbone, representing relational knowledge through neurons and relons stored within a tamper-proof datachain. Complementing this, the ANTOR protocol facilitates efficient, chunked file transfer for payload distribution using swarm-based retrieval mechanisms. The alt data network fulfills the remaining functional requirements by enabling application-specific payload distribution that diverges from the 1Schema triplet model, bypassing the overhead associated with ANTOR chunking. This network offers a request-response framework for payloads that applications need to store, retrieve, or process.
1Schema does not directly manage payload storage. Instead, it catalogues metadata pertaining to subnetworks as relons within the datachain. The neuromorphic hypergraph, which resides in memory, is constructed from these relons and is leveraged for querying information on available subnetworks. Any ANNODE can query this hypergraph to identify existing applications; however, access to payload distribution within these subnetworks is limited to nodes that have actively opted into participation. Nodes that have not subscribed to a specific subnetwork are unable to discover or access its payloads; for such nodes, discoverability is confined to the existence and descriptive metadata of the subnetwork itself.
The Alt Data Network protocol is defined by several core characteristics:
- Typed Payloads: All data is associated with a hierarchical type identifier, allowing applications to define their own namespaces.
- Opt-In Participation: Node operators configure which types they provide (by associating local processing logic) and which they accept for forwarding. Only participating nodes handle or forward messages for a given payload type.
- Flood Routing with Return‑Path Tracking: Requests are propagated to all eligible peers, with each node appending its identifier to a route list. Responses travel back along the reverse of that route, enabling efficient delivery without global knowledge.
- Adaptive Forwarding: Nodes maintain performance statistics per peer and per type, allowing them to probabilistically skip peers that have historically been slow or unresponsive, improving network efficiency.
- Route Learning: Successful return routes are cached for future use, enabling subsequent responses to follow learned optimal paths.
- Hypergraph-Based Discovery: The neuromorphic hypergraph can be queried for metadata about all subnetworks, enabling global discovery of available applications.
- Local-First Processing: Requests are handled by invoking pre-configured scripts, allowing arbitrary application logic without centralized coordination.
II. Architectural Components
II.I. Data Types and Payload Distribution Structure
The Alt Data Network protocol accepts any payload that does not conform to the 1Schema data model. This category includes binary blobs, JSON or XML documents, media files, application configuration data, and proprietary formats. Each payload is associated with a hierarchical type identifier, typically composed of a primary type and an optional subtype. These identifiers form namespaces that applications use to define their own data categories. For example, a social media application might use type “social” with subtypes “post”, “comment”, and “profile”.
Every request is encapsulated in a JSON object that includes the payload itself, a unique request identifier used for deduplication, an optional target node identifier, and various flags controlling propagation behavior (e.g., whether a response should be broadcast to all peers or follow the return route). The request identifier is generated by the originating node and must be unique to prevent replay and duplicate processing across the data network.
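As a hedged illustration of the envelope described above, the following sketch constructs such a request in Python. The field names (`type`, `request_id`, `target`, `route`, `broadcast_response`) are assumptions chosen for readability, not part of the specification:

```python
import json
import uuid

def build_request(data_type, payload, target=None, broadcast_response=False):
    """Build an Alt Data Network request envelope as a JSON string.

    The request identifier is generated by the originating node and must be
    unique, so a random UUID is used here for deduplication and replay
    prevention.
    """
    envelope = {
        "type": data_type,                        # hierarchical type id, e.g. "social.post"
        "request_id": str(uuid.uuid4()),          # unique id for deduplication
        "payload": payload,                       # application-specific data
        "route": [],                              # return path, appended to by forwarders
        "broadcast_response": broadcast_response, # propagation flag
    }
    if target is not None:
        envelope["target"] = target               # optional target node identifier
    return json.dumps(envelope)

req = json.loads(build_request("social.post", {"text": "hello"}))
```

The empty `route` list is filled in hop by hop during flooding, which is what later makes reverse-path delivery of the response possible.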
II.II. Node Roles and Configuration
Each ANNODE that participates in the alt data network maintains two configuration lists that determine its role:
- Provider list: The set of data types for which this node can handle requests locally. For each such type, the node has an associated script (or other logic) that is invoked when a request arrives and the target matches the node’s own identifier. The script receives the request payload and produces a response, which is then returned via the network.
- Whitelist: The set of data types that this node is willing to accept and forward. Incoming requests for types not in the whitelist are rejected, and the rejecting node may inform the sender of its whitelist to aid in future routing decisions.
These lists are populated based on the node operator’s preferences. A node may choose to provide certain types (acting as a server for that application) while only forwarding others. This opt‑in model ensures that nodes only incur resource costs for the applications they explicitly support.
II.III. Request Propagation
When a node originates a payload distribution request, it first determines the target. If the request specifies a particular node identifier and that node is a known peer, the request is sent directly. Otherwise, the request is broadcast to all peers that have the alt data helper enabled and have not already seen this request. To prevent infinite loops, each node maintains a per‑peer cache of recently seen request identifiers; if a request with the same identifier arrives from the same peer, it is silently ignored.
Upon receiving a request, a node performs the following steps:
- Verifies that the data type is in its whitelist; if not, returns a soft error.
- Checks its local request identifier cache; if already processed, discards the request.
- If the node is a provider for this type and the target (if specified) matches the node’s own identifier, it handles the request locally by invoking the associated script, generating a response.
- Otherwise, it forwards the request to all its eligible peers (those with alt data helper enabled) after appending its own identifier to a route list contained in the request. Before forwarding, it may consult performance statistics to decide whether to skip a particular peer (see Section on Adaptive Forwarding).
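The receive-side steps above can be sketched as follows. This is a minimal in-memory model under assumed names (`Node`, `whitelist`, `providers`, `seen`); the handler callable stands in for the locally configured script:

```python
class Node:
    """Minimal sketch of a participating node's receive-side logic."""

    def __init__(self, node_id, whitelist, providers):
        self.node_id = node_id
        self.whitelist = set(whitelist)  # types this node accepts/forwards
        self.providers = providers       # type -> handler (stands in for a local script)
        self.seen = set()                # recently seen request identifiers

    def handle(self, request, forward):
        rtype = request["type"]
        # 1. Whitelist check: soft error for types not opted into, including
        #    the whitelist to aid the sender's future routing decisions.
        if rtype not in self.whitelist:
            return {"error": "type_not_accepted",
                    "whitelist": sorted(self.whitelist)}
        # 2. Deduplication: discard already-processed request identifiers.
        if request["request_id"] in self.seen:
            return None
        self.seen.add(request["request_id"])
        # 3. Provider check: handle locally when we are (or may be) the target.
        target = request.get("target")
        if rtype in self.providers and (target is None or target == self.node_id):
            return {"request_id": request["request_id"],
                    "result": self.providers[rtype](request["payload"])}
        # 4. Otherwise append ourselves to the route and flood to eligible peers.
        request["route"].append(self.node_id)
        forward(request)
        return None

node = Node("n1", ["social.post"], {"social.post": lambda p: p["text"].upper()})
resp = node.handle({"type": "social.post", "request_id": "r1",
                    "payload": {"text": "hi"}, "route": []}, forward=lambda r: None)
```

A real implementation would additionally track seen identifiers per peer and consult the adaptive-forwarding statistics before calling `forward`.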
II.IV. Response Handling and Return Routing
Upon completing request processing, a provider node generates a response in JSON format. This response includes the original request identifier, the result data (potentially segmented into chunks for large payloads), and may include either a designated return target identifier or an encryption target. The response is transmitted back along the recorded route of the request: the node removes its own identifier from the end of the route list and forwards the response to the preceding node in the chain. This process repeats at each hop until the response reaches the originator.
In cases where the original request included a flag for broadcast propagation, the response is dispatched to all relevant peers. This approach is particularly useful for disseminating payloads that a broad range of nodes may need.
Nodes that receive the response are responsible for verifying signatures (if mandated), deduplicating based on the request identifier, and potentially publishing the response to local WebSocket subscribers for real-time delivery to applications. They subsequently route the response according to established forwarding protocols, unless they are the final destination.
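The reverse-path step described above can be sketched in a few lines. The `deliver` and `publish_local` callbacks are illustrative stand-ins for a real peer connection and the local WebSocket feed:

```python
def route_response_back(response, route, self_id, deliver, publish_local):
    """Send a response one hop back along the recorded request route.

    Each hop pops its own identifier from the end of the accumulated route
    and hands the response to its predecessor; an empty route means this
    node is the originator.
    """
    if route and route[-1] == self_id:
        route = route[:-1]                 # remove our own identifier from the end
    if not route:
        publish_local(response)            # originator reached: deliver to the app
        return
    predecessor = route[-1]                # previous node in the chain
    deliver(predecessor, response, route)  # predecessor repeats this step

delivered, published = [], []
route_response_back({"result": "ok"}, ["origin", "a", "b"], "b",
                    deliver=lambda peer, resp, rt: delivered.append((peer, list(rt))),
                    publish_local=published.append)
```

In the example, node "b" strips itself from the route and forwards to "a"; "a" would in turn forward to "origin", which publishes the response locally.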
II.V. Adaptive Forwarding and Route Learning
To improve efficiency, each node maintains per‑peer and per‑type performance counters: successful interactions (where a peer accepted and presumably forwarded a request) and failures (timeouts or explicit errors). These counters feed into a probabilistic forwarding decision: peers with a high failure rate are skipped more often, reducing unnecessary traffic. This adaptive behavior helps the alt data network self‑tune as peers join, leave, or exhibit degraded responsiveness (e.g., increased latency or failure rates).
Additionally, nodes cache successful return routes. When a response is successfully delivered via a particular route, the node stores a representation of that route. Subsequent responses destined for the same target node and data type can use the best known route (i.e., the one that has proven successful in the past) instead of relying solely on the path recorded in the request. This route learning mechanism gradually builds efficient paths through the network, reducing the need for flooding over time.
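Both mechanisms above can be sketched together. The exact weighting of the probabilistic skip is an assumption for illustration (success rate with a small floor), not taken from the specification:

```python
import random

class PeerStats:
    """Per-peer, per-type counters feeding the probabilistic forwarding decision."""

    def __init__(self):
        self.successes = 0
        self.failures = 0

    def should_forward(self, rng=random.random):
        # Untried peers are always attempted; otherwise forward with a
        # probability tied to the observed success rate. A small floor (0.1,
        # an assumed constant) keeps a struggling peer probeable so it can
        # recover after transient problems.
        total = self.successes + self.failures
        if total == 0:
            return True
        return rng() < max(self.successes / total, 0.1)

# Route learning: remember the route that last delivered successfully for a
# (target, type) pair, and prefer it over the path recorded in the request.
route_cache = {}

def remember_route(target, data_type, route):
    route_cache[(target, data_type)] = list(route)

def best_route(target, data_type, fallback):
    return route_cache.get((target, data_type), fallback)
```

A peer with one success and nine failures is then forwarded to only about 10% of the time, while cached routes let later responses bypass flooding entirely.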
III. Application-Specific Subnetworks: Developer Workflow and User Opt-In
A fundamental architectural principle of the Alt Data Network is that participation is opt‑in at the subnetwork level. No single ANNODE is required to handle every data type defined by every application. Instead, developers define types and associate them with processing logic, and node operators choose which types to support.
III.I. Developer Workflow
A developer creating a new distributed application follows this process to establish an alt data network. First, they define one or more hierarchical type identifiers that will identify their application’s data. They then implement a script (or other executable) that processes incoming requests for those types.
The script receives the request payload as input and must output a response. The developer also defines the schema and metadata for the subnetwork, which are recorded as relons in the datachain; the neuromorphic hypergraph, built from these relons, can then be queried for subnetwork discovery.
The ANNE ecosystem provides templates and examples for common scripting languages. The developer places the script on their node and configures the node to associate it with the appropriate type identifiers via its properties. The script can be distributed to other node operators who wish to become providers for the same subnetwork, either manually or through an application store subnetwork.
III.II. Payload Distribution: User Opt-In Mechanism
Node operators control participation in alt data subnetworks through their ANNODE properties file. The properties file contains two lists: one for types the node provides (i.e., for which it will execute local scripts), and one for types it accepts (i.e., will forward). For example, an operator might list “social_posts” in both lists if they want to both host content and forward requests for that application, while listing “weather_data” only in the accept list if they are willing to relay but not serve.
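A minimal sketch of how these two lists might look in an ANNODE properties file, following the example above. The property keys shown are assumptions for illustration; the actual keys are defined by the ANNODE implementation:

```properties
# Types this node provides: a local script is invoked for matching requests.
altdata.provide=social_posts
altdata.provide.social_posts.script=/opt/anne/scripts/social_posts.sh

# Types this node accepts and forwards, but does not handle locally.
altdata.accept=social_posts,weather_data
```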
When an ANNODE starts, it reads these properties and initializes its internal state accordingly. Nodes that are providers will invoke the associated scripts when they receive matching requests. Nodes that only accept a type will forward requests but not handle them locally.
This opt‑in mechanism ensures that nodes only incur resource costs for the applications they explicitly choose to support. A node running a general-purpose ANNODE might provide several widely used types, while a node operated by an enterprise might restrict its participation to a limited set of types relevant to its business.
By controlling its whitelist and provider configuration, and optionally refraining from publishing type metadata to the hypergraph, an enterprise can effectively create a private subnetwork whose payloads are only exchanged among authorized nodes. Encryption of payloads, when used, further ensures that data remains confidential even if forwarded through intermediate nodes.
III.III. Example: Decentralized Application Store
A concrete illustration of this payload distribution workflow is a decentralized application store. The store operator defines a primary type “appstore” with subtypes for specific functions: “appstore.list” for retrieving available applications, “appstore.get” for downloading application packages, and “appstore.review” for posting user reviews. The operator records metadata about these types in the datachain, making them discoverable via the neuromorphic hypergraph. Any node can query the hypergraph to learn that an application store exists and what types it uses.
The store operator runs an ANNODE configured as a provider for these types, with scripts that serve application metadata and application packages (e.g., zip archives containing the web application’s files, configuration, and dependency manifests). Nodes that opt into the store subnetwork by adding “appstore” to their accept list receive and store these payloads locally as part of the peer‑to‑peer payload distribution mechanism. They may also choose to become providers by obtaining and configuring the store’s scripts, effectively becoming mirrors that help distribute the application packages.
A user who wishes to browse the store first queries their local ANNODE’s hypergraph to discover the store’s type identifiers. They then add “appstore” to their node’s accept list via the ANNODE properties. Once configured, the user’s node can originate requests.
When the user requests an application listing, their node constructs a request with type “appstore.list” and a unique request identifier. The request propagates through the alt data network, with each forwarding node appending its identifier to the route list. When the request reaches a provider node (the store operator or a mirror), that node invokes its associated script, which returns the listing from its locally stored metadata.
The response travels back along the accumulated route to the user’s node, which may publish it via WebSocket to a browser‑based interface displaying the available applications. When the user selects an application to install, their node sends a request with type “appstore.get” and the application identifier.
The provider node responds with the application package. Upon receiving the package, the user’s node may automatically execute post‑installation scripts: unpacking the archive, installing required dependencies (e.g., PHP libraries, Node.js modules), and configuring the application to run on the local ANNODE. The node may also update its properties to opt into any additional subnetworks that the application requires for payload distribution, such as a shared data type for user‑generated content. The user thus becomes an active participant in the application’s ecosystem with minimal manual intervention.
III.IV. Isolation and Resource Management
Although requests for different data types flow through the same peer connections, isolation is maintained by:
- Per‑type whitelist and provider checks, ensuring nodes only process or forward messages for types they have opted into.
- Per‑peer request identifier tracking, preventing duplicate processing.
- Performance counters maintained per type and per peer, enabling adaptive behavior without cross‑type interference.
This design ensures that a misconfigured or malicious subnetwork cannot overwhelm nodes that do not wish to participate in payload distribution, and that resource consumption remains under the operator’s control.
IV. Integration with the ANNE Ecosystem
The Alt Data Network operates in close coordination with other ANNE components, each serving a distinct role. The neuromorphic hypergraph is queried for metadata about alt data subnetworks, including their identifiers and schemas, which are stored as relons in the datachain. This enables global discovery of available applications. However, the hypergraph does not contain per‑payload information; that is confined to the nodes that handle requests.
The Alt Data Network and ANTOR can be used in tandem: an alt data request may return a response containing an ANTOR manifest, prompting the client to retrieve the associated file via ANTOR’s swarm‑based transfer protocol. This separation of concerns allows each protocol to focus on its core function – request‑response messaging for the Alt Data Network and efficient file distribution for ANTOR. All metadata transactions recorded in the datachain are secured by the PoST consensus layer, ensuring that references to subnetworks are immutable and verifiable.
Client applications interact with the Alt Data Network through the ANNODE HTTP(S) API. A typical request flow proceeds as follows:
- The application sends an HTTP request to its local ANNODE, specifying the data type identifier and a payload containing the request details and a unique request identifier.
- The local ANNODE propagates the request through the network as described above.
- When a provider node handles the request, it invokes the configured script, which produces a response.
- The response travels back along the route and is delivered to the originating node, which may publish it via WebSocket to the waiting application.
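The flow above might look as follows from the application side, sketched in Python. The endpoint path `/altdata/request` and the request fields are assumptions for illustration, not a documented ANNODE API:

```python
import json
import uuid
import urllib.request

def build_http_body(data_type, payload, request_id=None):
    """Build the JSON body for an Alt Data Network request.

    The unique request identifier is generated here (a random UUID) and
    returned so the caller can correlate the asynchronous response.
    """
    request_id = request_id or str(uuid.uuid4())
    body = json.dumps({"type": data_type,
                       "request_id": request_id,
                       "payload": payload})
    return request_id, body

def send_alt_data_request(node_url, data_type, payload):
    """POST a request to the local ANNODE over HTTP.

    The response arrives asynchronously, e.g. over the node's WebSocket
    feed, keyed by the returned request identifier.
    """
    request_id, body = build_http_body(data_type, payload)
    req = urllib.request.Request(node_url + "/altdata/request",  # assumed path
                                 data=body.encode("utf-8"),
                                 headers={"Content-Type": "application/json"},
                                 method="POST")
    urllib.request.urlopen(req)  # errors (node down, rejected type) raise here
    return request_id
```

Separating body construction from transport keeps the correlation identifier available before the request is even sent, which matches the asynchronous, WebSocket-delivered response model.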
Nodes that have not opted into the relevant data type cannot participate in request handling or forwarding for that type.
V. Incentives and Economic Model
The Alt Data Network itself does not prescribe or enforce any particular incentive mechanism. Instead, it provides the foundational messaging and payload distribution layer upon which application developers can implement custom economic models by leveraging the broader ANNE ecosystem.
Since all requests and responses are typed and carry verifiable signatures, developers can design protocols that issue payments, track usage, or reward participants using 1Schema relons and the PoST datachain. For example, a provider node could include payment instructions in its response, or a store subnetwork could require micropayments for application downloads. The flexibility of the architecture allows any economic model, e.g. subscription-based, pay-per-request, staking, or reputation systems, to be built atop the core messaging layer without requiring changes to the protocol and the payload distribution mechanism itself.
VI. Technical Foundations
The Alt Data Network synthesizes established distributed systems concepts, adapted for opt‑in application subnetworks. Flood routing with return‑path tracking ensures that messages can reach any node without requiring a global routing table. Route caching and probabilistic forwarding improve efficiency over pure flooding. Content‑addressable request identifiers and cryptographic signatures provide integrity and prevent replay attacks. The use of per‑peer request identifier tracking and whitelists enforces access control and prevents unnecessary processing.
From a complex systems perspective, the Alt Data Network exhibits small‑world properties: the combination of flooding and caching allows nodes to learn efficient paths over time, while the probabilistic forwarding reduces redundant traffic. This topology provides fault tolerance and adaptivity as nodes join and leave.
VII. Applications and Use Cases
The Alt Data Network enables a range of decentralized applications, each operating its own data type. Social and content platforms use types for posting and retrieving user‑generated content. Decentralized application stores use types for listing and distributing apps. IoT and sensor networks use types for publishing sensor readings.
Organizations can operate restricted subnetworks by defining custom type identifiers and configuring their nodes to accept and provide only those types, with access limited through selective whitelisting and optional payload encryption. Research institutions can create types for sharing large datasets, with scripts that serve data from local storage. Any application that requires payload distribution through request‑response processing without central servers can leverage the Alt Data Network.
VIII. Closing Remarks
The Alt Data Network provides the request‑response data layer essential to a complete decentralized stack, operating alongside the semantic foundation of 1Schema and the file distribution capabilities of ANTOR. By handling payloads that do not fit the 1Schema model, it enables applications to operate without reliance on centralized infrastructure.
The opt‑in subnetwork architecture ensures that participation remains sovereign: node operators choose which data types to support, developers can define custom processing logic via scripts, and the network scales through cooperative forwarding and route learning.
The neuromorphic hypergraph is queried for discovering available subnetworks, but actual request handling is confined to nodes that have explicitly opted in. Through flood routing with return‑path tracking, adaptive forwarding, and cryptographic verification, the Alt Data Network protocol ensures that data requests are processed reliably and responses are delivered back to the originator. In conjunction with the hypergraph, ANTOR, and PoST consensus, it forms an integrated ecosystem where all data, whether semantic or unstructured, can be stored, discovered, and transferred through peer‑to‑peer relationships and explicit user consent.

Support
ANNE Media is a sovereign non-profit organization. All of our expenses are funded by user donations. If you appreciate our efforts toward a free and sovereign web, please consider supporting us.
Thank you, kind sir or ma’am.

84VrmTNQq4hbfBQce5LfUe8BHmBFSDKHHFcSws6FRa9oiDUQANBkRnKYChabe9HRYUVAu9tcojXNFJL484KQPdJFCxRecbP

