Getting Started

Welcome To Neuronum

The Neuronum SDK provides everything you need to set up your favorite AI model as a self-hosted work environment that can be managed and called from our official client "kybercell™ - Your Private AI Workspace" (Windows & Android) or via the Neuronum Client API.

⚠️ Development Status: The Neuronum SDK is currently in beta and is not production-ready. It is intended for development, testing, and experimental purposes only. Do not use in production environments or for critical applications.

Requirements

  • Python >= 3.8
  • Linux/NVIDIA GPU: CUDA-compatible GPU + CUDA Toolkit
  • macOS Apple Silicon: Ollama

Create a Neuronum ID

Set up and activate a virtual environment

Bash
python3 -m venv ~/neuronum-venv
source ~/neuronum-venv/bin/activate

Install the Neuronum SDK

Bash
pip install neuronum==2026.01.0.dev2

Note: Always activate this virtual environment (source ~/neuronum-venv/bin/activate) before running any neuronum commands.

Create a Neuronum ID (called a Cell)

Bash
neuronum create-cell

Set up a private AI Workspace

Install & start the Workspace Server

Bash
neuronum start-server

Stop the Workspace Server

Bash
neuronum stop-server

Server Configuration: For model settings, file paths, and advanced options, see the Server documentation.

Call your Workspace Agent

Manage and call your Agent with "kybercell" (official Neuronum Client) or build your own custom Client using the Neuronum Client API.

Python API

Python
import asyncio
from neuronum import Cell

async def main():

    async with Cell() as cell:

        # ============================================
        # Target Cell ID
        # ============================================
        cell_id = "id::cell"

        # ============================================
        # Core Methods
        # ============================================
        # cell.activate_tx(cell_id, data)  - Send request and wait for response
        # cell.stream(cell_id, data)       - Send request via WebSocket (no response)
        # cell.sync()                      - Receive incoming requests
        # cell.tx_response(transmitter_id, data, public_key)  - Send response to a request

        # ============================================
        # Example: Send a prompt to your Agent
        # ============================================
        prompt_data = {
            "type": "prompt",
            "prompt": "Show me our sales performance"
        }
        tx_response = await cell.activate_tx(cell_id, prompt_data)
        print(tx_response)

if __name__ == '__main__':
    asyncio.run(main())

Full API Reference: For all examples including action approval, tool management, audit logs, and receiving requests, see the Client API documentation.

Create a Workspace Tool

Neuronum Workspace Tools are MCP-compliant (Model Context Protocol) plugins that extend your Agent's functionality. See the full Tools documentation for details.

Bash
neuronum init-tool

Next Steps

Explore more capabilities:

  • Client API - Full API reference with all examples
  • Tools CLI - Create custom MCP-compliant tools to extend your Agent's functionality
  • Server Configuration - Customize your server settings and model parameters
  • E2EE Protocol - Learn how Neuronum keeps your data secure

Need Help? For more information, visit the GitHub repository or contact us.

Neuronum Tools CLI

Create a Workspace Tool

Neuronum Workspace Tools are MCP-compliant (Model Context Protocol) plugins that can be installed on the Neuronum Workspace Server and extend your Agent's functionality, enabling it to interact with external data sources and your system.

Tools Note: Tools are not stored encrypted on neuronum.net. Do not include credentials, API keys, secure tokens, passwords, or any sensitive data directly in your tool code. Use environment variables or the variables configuration field (when available) to handle sensitive information securely.

Requirements

  • Python >= 3.8

Connect To Neuronum

Installation

Create and activate a virtual environment:

Bash
python3 -m venv ~/neuronum-venv
source ~/neuronum-venv/bin/activate

Install the Neuronum SDK:

Bash
pip install neuronum==2026.01.0.dev2

Note: Always activate this virtual environment (source ~/neuronum-venv/bin/activate) before running any neuronum commands.

Create a Neuronum Cell (secure Identity)

Bash
neuronum create-cell

Connect your Cell

Bash
neuronum connect-cell

Initialize a Tool

Bash
neuronum init-tool

You will be prompted to enter a tool name and description (e.g., "Test Tool" and "A simple test tool"). This creates a new folder named using the format Tool Name_ToolID (e.g., Test Tool_019ac60e-cccc-7af5-b087-f6fcf1ba1299).

This folder contains two files:

  • tool.config - Configuration and metadata for your tool
  • tool.py - Your Tool/MCP server implementation

Example tool.config:

Config
{
  "tool_meta": {
    "tool_id": "019ac60e-cccc-7af5-b087-f6fcf1ba1299",
    "version": "1.0.0",
    "name": "Test Tool",
    "description": "A simple test tool",
    "audience": "private",
    "auto_approve": false,
    "logo": "https://neuronum.net/static/logo_new.png"
  },
  "legals": {
    "terms": "https://url_to_your/terms",
    "privacy_policy": "https://url_to_your/privacy_policy"
  },
  "requirements": [],
  "variables": []
}
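
Since tool.config is plain JSON, it can be loaded and sanity-checked with the standard library. A minimal sketch (the required-field set below is an assumption for illustration, not an official schema):

```python
import json

# Hypothetical sanity check for a tool.config file; this field list
# is an assumption, not an official Neuronum schema.
REQUIRED_META = {"tool_id", "version", "name", "description", "audience", "auto_approve"}

def load_tool_config(path: str) -> dict:
    """Load a tool.config file and verify the expected tool_meta keys exist."""
    with open(path) as f:
        config = json.load(f)
    missing = REQUIRED_META - set(config.get("tool_meta", {}))
    if missing:
        raise ValueError(f"tool.config missing tool_meta fields: {sorted(missing)}")
    return config
```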

Example tool.py:

Python
from mcp.server.fastmcp import FastMCP

# Create server instance
mcp = FastMCP("simple-example")

@mcp.tool()
def echo(message: str) -> str:
    """Echo back a message"""
    return f"Echo: {message}"

if __name__ == "__main__":
    mcp.run()

Tool Configuration Fields

audience

Controls who can install and use your tool.

  • "private" - Only you can use this tool
  • "public" - Anyone on the Neuronum network can install this tool
  • "id::cell" - Share with specific cells (comma-separated list)

JSON
"audience": "private"

JSON
"audience": "public"

JSON
"audience": "acme::cell, community::cell, business::cell"
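
A client interpreting the audience field has to handle all three shapes. A hypothetical helper (not part of the Neuronum SDK):

```python
def allowed_cells(audience: str):
    """Interpret the audience field: keep "private"/"public" as-is,
    otherwise split the comma-separated cell list.

    Illustration only; not the SDK's actual parsing logic.
    """
    if audience in ("private", "public"):
        return audience
    return [cell.strip() for cell in audience.split(",")]
```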

auto_approve

Controls whether tool execution requires operator approval.

  • false (default) - The agent proposes the tool action and waits for the operator to approve or decline before executing
  • true - All tools in the script execute immediately without requiring operator approval (useful for read-only tools like search or information lookups)
  • ["tool_name_1", "tool_name_2"] - Only the listed tools are auto-approved; all other tools in the script require approval (useful when a script contains both read-only and write operations)

JSON
"auto_approve": false

JSON
"auto_approve": true

JSON
"auto_approve": ["view_meetings"]
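
Because auto_approve can be a boolean or a list, any consumer of the config has to normalize it. One way to interpret the three documented forms (a sketch, not the SDK's actual logic):

```python
def needs_approval(auto_approve, tool_name: str) -> bool:
    """Return True if the operator must approve this tool call.

    Interprets the three documented auto_approve forms; this helper
    is an illustration, not part of the Neuronum SDK.
    """
    if auto_approve is True:            # every tool runs immediately
        return False
    if isinstance(auto_approve, list):  # only listed tools are auto-approved
        return tool_name not in auto_approve
    return True                         # false (default): always ask
```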

page (tool return value)

Tools can optionally return a "page" key in their result to specify which HTML template the server should render and serve to the client. The returned data from the tool is passed into the Jinja2 template, so all keys in the tool's return dict are available as template variables. If no "page" key is returned, the server defaults to serving index.html.

Example tool returning a page with dynamic data:

Python
@mcp.tool()
def view_orders(status: str = "pending", operator: str = "") -> dict:
    """View orders filtered by status"""
    all_orders = [{"id": 1, "item": "Laptop", "status": "pending"},
                  {"id": 2, "item": "Monitor", "status": "shipped"}]
    orders = [o for o in all_orders if o["status"] == status]
    return {
        "success": True,
        "page": "orders.html",
        "total_orders": len(orders),
        "orders": orders
    }

Example Jinja2 template (templates/orders.html):

HTML
<h1>Orders ({{ total_orders }})</h1>
{% for order in orders %}
<div>
  <p>#{{ order.id }} - {{ order.item }} ({{ order.status }})</p>
</div>
{% endfor %}

requirements

List of Python packages your tool needs. Automatically installed by the Neuronum Server when the tool is added. Use the same format as pip requirements.

JSON
"requirements": [
  "requests",
  "pandas>=2.0.0",
  "openai==1.12.0"
]

variables

List of variable names that users must provide when installing your tool. During installation, users are prompted to set each variable one by one; the values are sent encrypted to the server and automatically injected into your tool.py code.

Important: You don't need to add lines like API_TOKEN = "value" to your tool.py - the server automatically sets these variables based on user inputs.

JSON
"variables": [
  "API_TOKEN",
  "DB_PASSWORD",
  "SERVICE_URL"
]

How to use variables in your tool.py:

Wrong - Don't hardcode sensitive values:

Python
from mcp.server.fastmcp import FastMCP
import requests

mcp = FastMCP("api-tool")

# DON'T DO THIS - Never hardcode credentials!
API_TOKEN = "sk-1234567890abcdef"  # This will be exposed!

@mcp.tool()
def call_api(endpoint: str) -> str:
    """Call external API"""
    response = requests.get(f"https://api.example.com/{endpoint}",
                           headers={"Authorization": f"Bearer {API_TOKEN}"})
    return response.text

Correct - Use variables (server auto-injects values):

First, declare the variable in your tool.config:

JSON
{
  ...
  "requirements": ["requests"],
  "variables": ["API_TOKEN"]
}

Then use it in your tool.py without defining it:

Python
from mcp.server.fastmcp import FastMCP
import requests

mcp = FastMCP("api-tool")

# The server automatically sets API_TOKEN based on user input
# You just use it directly - no need to define it!

@mcp.tool()
def call_api(endpoint: str) -> str:
    """Call external API"""
    response = requests.get(f"https://api.example.com/{endpoint}",
                           headers={"Authorization": f"Bearer {API_TOKEN}"})
    return response.text
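
Conceptually, the declared variables end up defined in the tool's namespace before its code runs. A simplified sketch of such an injection step (illustrative only; the server's real mechanism may differ):

```python
def inject_variables(tool_source: str, values: dict) -> str:
    """Prepend variable assignments to a tool's source code.

    Simplified illustration of how declared variables could be made
    available to tool.py; the actual server mechanism may differ.
    """
    assignments = "\n".join(f"{name} = {value!r}" for name, value in values.items())
    return assignments + "\n\n" + tool_source
```

Running the injected source then behaves as if the assignments had been written at the top of tool.py.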

Note: This feature is only available when using the official Neuronum client.

Update a Tool

After modifying your tool.config or tool.py files, submit the updates using:

Bash
neuronum update-tool

Delete a Tool

Bash
neuronum delete-tool

Need Help? For more information, visit the GitHub repository or contact us.

Neuronum Client API

Call your Workspace Agent

Manage and call your Agent with "kybercell" (official Neuronum Client) or build your own custom Client using the Neuronum Client API.

Python API

Python
import asyncio
from neuronum import Cell

async def main():

    async with Cell() as cell:

        # ============================================
        # Target Cell ID
        # ============================================
        cell_id = "id::cell"

        # ============================================
        # Core Methods
        # ============================================
        # cell.activate_tx(cell_id, data)  - Send request and wait for response
        # cell.stream(cell_id, data)       - Send request via WebSocket (no response)
        # cell.sync()                      - Receive incoming requests
        # cell.tx_response(transmitter_id, data, public_key)  - Send response to a request

        # ============================================
        # Example 1: Send a prompt to your Agent
        # ============================================
        # The agent routes your message to the appropriate tool
        # and returns the result with an optional HTML view
        prompt_data = {
            "type": "prompt",
            "prompt": "Show me our sales performance"
        }
        tx_response = await cell.activate_tx(cell_id, prompt_data)
        print(tx_response)

        # ============================================
        # Example 2: Action Approval Flow
        # ============================================
        # When the agent suggests a write action, it returns an action_id
        # The client can then approve or decline the action

        # Approve a pending action
        approve_data = {
            "type": "approve",
            "action_id": 123  # ID returned from prompt response
        }
        tx_response = await cell.activate_tx(cell_id, approve_data)
        print(tx_response)

        # Decline a pending action
        decline_data = {
            "type": "decline",
            "action_id": 123
        }
        tx_response = await cell.activate_tx(cell_id, decline_data)
        print(tx_response)

        # ============================================
        # Example 3: Index (Welcome Page)
        # ============================================

        # Get the index/welcome page
        get_index_data = {"type": "get_index"}
        index = await cell.activate_tx(cell_id, get_index_data)
        print(index)

        # ============================================
        # Example 4: Tool Management
        # ============================================

        # List all available tools on Neuronum network
        available_tools = await cell.list_tools()
        print(available_tools)
        # Returns list of tools with metadata: [{"tool_id": "...", "name": "...", "description": "..."}, ...]

        # Get all installed tools on your agent
        get_tools_data = {"type": "get_tools"}
        tools_info = await cell.activate_tx(cell_id, get_tools_data)
        print(tools_info)
        # Returns: {"tools": {"tool_id": {config_data}, ...}}

        # Install a tool (requires tool to be published)
        # Use stream() instead of activate_tx() to listen for agent restart
        install_tool_data = {
            "type": "install_tool",
            "tool_id": "019ac60e-cccc-7af5-b087-f6fcf1ba1299",
            "variables": {"API_TOKEN": "your-token"}  # Optional: tool variables
        }
        await cell.stream(cell_id, install_tool_data)
        # Agent will restart and send "ping" when ready

        # Delete a tool
        delete_tool_data = {
            "type": "delete_tool",
            "tool_id": "019ac60e-cccc-7af5-b087-f6fcf1ba1299"
        }
        await cell.stream(cell_id, delete_tool_data)
        # Agent will restart after deletion

        # ============================================
        # Example 5: Actions Audit Log
        # ============================================

        # Get all actions (audit log)
        get_actions_data = {"type": "get_actions"}
        actions = await cell.activate_tx(cell_id, get_actions_data)
        print(actions)
        # Returns list of actions with status, tool info, timestamps, etc.

        # ============================================
        # Example 6: Agent Status
        # ============================================

        # Check if agent is running
        status_data = {"type": "get_agent_status"}
        status = await cell.activate_tx(cell_id, status_data)
        print(status)  # Returns: {"json": "running"}

        # ============================================
        # Example 7: Receiving Requests (Server-side)
        # ============================================

        # Listen for incoming requests using sync()
        async for transmitter in cell.sync():
            data = transmitter.get("data", {})
            message_type = data.get("type")

            # Send encrypted response back to the client
            await cell.tx_response(
                transmitter_id=transmitter.get("transmitter_id"),
                data={"json": "Response message"},
                client_public_key_str=data.get("public_key", "")
            )

if __name__ == '__main__':
    asyncio.run(main())
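
On the receiving side, the request "type" values shown above lend themselves to a dispatch table. A sketch of such a router (the handler bodies here are made up for illustration and are not part of the Neuronum API):

```python
# Hypothetical dispatch table mapping request "type" values to handlers.
def handle_prompt(data: dict) -> dict:
    return {"json": f"prompt received: {data.get('prompt', '')}"}

def handle_status(data: dict) -> dict:
    return {"json": "running"}

HANDLERS = {
    "prompt": handle_prompt,
    "get_agent_status": handle_status,
}

def dispatch(data: dict) -> dict:
    """Route an incoming request dict to the matching handler."""
    handler = HANDLERS.get(data.get("type"))
    if handler is None:
        return {"json": f"unknown request type: {data.get('type')}"}
    return handler(data)
```

Inside the sync() loop, the result of dispatch(data) would then be passed to cell.tx_response().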

Need Help? For more information, visit the GitHub repository or contact us.

Neuronum Server

About Neuronum Server

Neuronum Server is an agent wrapper that turns your model into an agentic backend server. It interacts with "kybercell" (the official Neuronum Client) or the Neuronum Client API and calls installed tools.

Requirements

  • Python >= 3.8
  • Linux/NVIDIA GPU: CUDA-compatible GPU + CUDA Toolkit
  • macOS Apple Silicon: Ollama

Connect To Neuronum

Installation

Create and activate a virtual environment:

Bash
python3 -m venv ~/neuronum-venv
source ~/neuronum-venv/bin/activate

Install the Neuronum SDK:

Bash
pip install neuronum==2026.01.0.dev2

Note: Always activate this virtual environment (source ~/neuronum-venv/bin/activate) before running any neuronum commands.

Create a Neuronum Cell (secure Identity)

Bash
neuronum create-cell

Connect your Cell

Bash
neuronum connect-cell

Start the Server

Bash
neuronum start-server

This command will:

  • Clone the neuronum-server repository (if not already present)
  • Detect your hardware platform (Apple Silicon or NVIDIA GPU)
  • Create a Python virtual environment
  • Install platform-specific dependencies
  • On Apple Silicon: Verify Ollama is installed, start the Ollama server, and pull the configured model
  • On NVIDIA GPU: Start the vLLM server in the background and wait for model loading
  • Launch the Neuronum Server

Check Server Status

Bash
neuronum status

This shows whether the Neuronum Server and vLLM Server are currently running, along with their PIDs.

View Logs

Bash
# Main server log
tail -f neuronum-server/server.log
# vLLM log (NVIDIA GPU only)
tail -f neuronum-server/vllm_server.log

Stop the Server

Bash
neuronum stop-server

What the Server Does

Once running, the server will:

  • Connect to the Neuronum network using your Cell credentials
  • Initialize a local SQLite database for conversation memory and auto-index files in the templates/ directory
  • Auto-discover and launch any MCP servers in the tools/ directory
  • Process messages from clients via the Neuronum network
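
The tool auto-discovery step can be pictured as a directory scan for each installed tool's entry point. A hedged sketch of what that might look like (the server's actual discovery logic may differ):

```python
from pathlib import Path
from typing import List

def discover_tools(tools_dir: str) -> List[Path]:
    """Find tool.py entry points one level below the tools/ directory.

    Illustration of a discovery scan; not the server's actual code.
    """
    root = Path(tools_dir)
    if not root.is_dir():
        return []
    return sorted(root.glob("*/tool.py"))
```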

Server Configuration

The server can be customized by editing the neuronum-server/server.config file. Here are the available options:

File Paths

Python
LOG_FILE = "server.log"              # Server log file location
DB_PATH = "agent_memory.db"          # SQLite database for conversations and knowledge
TEMPLATES_DIR = "./templates"        # HTML templates to auto-index on startup and serve

Model Configuration

Python
MODEL_MAX_TOKENS = 512               # Maximum tokens in responses (higher = longer answers)
MODEL_TEMPERATURE = 0.3              # Creativity (0.0 = deterministic, 1.0 = creative)
MODEL_TOP_P = 0.85                   # Nucleus sampling (lower = more predictable)

vLLM Server (NVIDIA GPU)

Python
VLLM_MODEL_NAME = "Qwen/Qwen2.5-3B-Instruct"  # Model to load
                                               # Examples: "Qwen/Qwen2.5-1.5B-Instruct",
                                               #           "meta-llama/Llama-3.2-3B-Instruct"
VLLM_HOST = "127.0.0.1"              # Server host (127.0.0.1 = local only)
VLLM_PORT = 8000                     # Server port
VLLM_API_BASE = "http://127.0.0.1:8000/v1"  # Full API URL

Ollama (Apple Silicon)

Python
OLLAMA_MODEL_NAME = "llama3.1:8b"    # Model to load
                                     # Examples: "llama3.2:3b", "qwen2.5:3b", "qwen2.5:7b"
OLLAMA_API_BASE = "http://127.0.0.1:11434/v1"  # Ollama API URL (default port: 11434)

Conversation

Python
CONVERSATION_HISTORY_LIMIT = 10      # Recent messages to include in context
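
CONVERSATION_HISTORY_LIMIT caps how much prior dialogue is replayed into the model's context. The trimming might look like this (a sketch, not the server's actual implementation):

```python
CONVERSATION_HISTORY_LIMIT = 10  # from server.config

def build_context(history, new_message):
    """Keep only the most recent messages plus the new one.

    Sketch of history trimming; not the server's actual code.
    """
    recent = history[-CONVERSATION_HISTORY_LIMIT:]
    return recent + [new_message]
```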

After modifying the configuration, restart the server for changes to take effect:

Bash
neuronum stop-server
neuronum start-server

Need Help? For more information, visit the GitHub repository or contact us.

Neuronum E2EE Protocol

End-to-End Encrypted Communication

The Neuronum SDK is powered by an end-to-end encrypted communication protocol based on public/private key pairs derived from a randomly generated 12-word mnemonic. All data is relayed through neuronum.net, providing secure communication without the need to set up public web servers or expose your infrastructure to the public internet.

How It Works

1. Cell Creation & Key Generation

When you create a Neuronum Cell, a cryptographically secure 12-word mnemonic phrase is randomly generated. This mnemonic serves as the seed for deriving your public/private key pair.

  • Private Key: Stored locally on your device and never transmitted
  • Public Key: Registered with the Neuronum network as your Cell identity
  • Mnemonic: Your recovery phrase for regenerating keys on new devices
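
The exact derivation scheme is not documented here, but mnemonic-based key derivation generally follows the BIP-39 pattern: stretch the phrase into a fixed-size seed with PBKDF2, then derive the key pair from that seed. A standard-library sketch of the seed step (illustrative only; Neuronum's actual scheme may differ):

```python
import hashlib
import unicodedata

def mnemonic_to_seed(mnemonic: str, passphrase: str = "") -> bytes:
    """Stretch a 12-word mnemonic into a 64-byte seed, BIP-39 style.

    Illustrative only; Neuronum's actual key-derivation scheme may differ.
    """
    words = unicodedata.normalize("NFKD", mnemonic)
    salt = unicodedata.normalize("NFKD", "mnemonic" + passphrase)
    return hashlib.pbkdf2_hmac("sha512", words.encode(), salt.encode(), 2048)
```

Because the derivation is deterministic, the same 12 words always regenerate the same seed, which is what makes recovery on a new device possible.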

2. End-to-End Encryption

All messages sent through the Neuronum network are encrypted before transmission and can only be decrypted by the intended recipient:

  • Messages are encrypted using the recipient's public key
  • Only the recipient's private key can decrypt the message
  • The Neuronum relay server (neuronum.net) cannot read message contents
  • Your data remains private even as it passes through the network infrastructure

3. Relay Architecture

Instead of requiring you to configure firewalls, port forwarding, or public IP addresses, Neuronum uses a relay architecture:

  • Both clients and servers connect outbound to neuronum.net
  • The relay server forwards encrypted messages between your client and agent
  • No need to expose your Agent or infrastructure to the public internet
  • Works seamlessly behind NAT, firewalls, and corporate networks

Security Benefits

Privacy by Design

  • Zero-Knowledge Architecture: The relay server never has access to message contents
  • Client-Side Encryption: All encryption/decryption happens on your local device
  • No Metadata Collection: Minimal metadata is stored or logged
  • Self-Custody: You control your private keys and recovery mnemonic

Network Security

  • No Public Exposure: Your server remains behind your firewall
  • No Port Forwarding: All connections are outbound from your network
  • TLS Transport: Additional transport layer encryption for network traffic
  • Secure by Default: No configuration needed to achieve secure communication

Recovery & Access Control

  • Mnemonic Recovery: Restore your Cell on any device using your 12-word phrase
  • Device Independence: Access your Agent from multiple devices with the same Cell
  • Secure Backups: Your mnemonic is all you need to backup and restore access

Getting Started with Neuronum Cells

Create Your Cell

Creating a Cell generates your cryptographic identity:

Bash
neuronum create-cell

This command will:

  • Generate a secure 12-word mnemonic phrase
  • Derive your public/private key pair from the mnemonic
  • Register your public key with the Neuronum network
  • Store your encrypted private key locally

Important: Save your 12-word mnemonic phrase in a secure location. This is the only way to recover your Cell if you lose access to your device. Anyone with this phrase can access your Cell.

Connect Your Cell

Connect to the Neuronum network to start sending and receiving encrypted messages:

Bash
neuronum connect-cell

Need Help? For more information, visit the GitHub repository or contact us.