
Setup Guides

Mining and Validating on Subnet 46



Mining on Subnet 46

Earn rewards by submitting property valuation models that outperform existing solutions.

Why mine

Winning miners receive daily alpha token emissions from Subnet 46. Distribution is currently winner-take-all.

Requirements

  • Machine learning expertise
  • Access to historical property data
  • Computational resources for model training
  • Hugging Face account for model storage
  • Bittensor wallet and registration on Subnet 46

Mining process

1. Develop your model

Build an ML model that predicts US property sale prices.

Data sources:

  • Public property records
  • MLS data (if licensed)
  • Open datasets (e.g., California housing data)

Approach suggestions:

  • Start with proven techniques (e.g., Kaggle competition winners)
  • Train models by state and ensemble
  • Innovation is up to you

Key considerations:

  • Model must run inference deterministically
  • Must handle standardized input format
  • Must produce predictions within reasonable time bounds
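
As a rough illustration of these considerations, the sketch below trains a gradient boosting model with a fixed random seed so inference is deterministic, reads a standardized property record from JSON, and returns a price in USD. The feature names, column names, and file paths are hypothetical; the actual input schema is defined by the subnet, not by this example.

# Minimal sketch only; feature names, columns, and paths are hypothetical.
import json
import joblib
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

FEATURES = ["sqft", "bedrooms", "bathrooms", "year_built", "latitude", "longitude"]

def train(training_csv: str, model_path: str = "model.joblib") -> None:
    df = pd.read_csv(training_csv)
    model = GradientBoostingRegressor(random_state=42)  # fixed seed keeps the model reproducible
    model.fit(df[FEATURES], df["sale_price"])
    joblib.dump(model, model_path)

def predict(property_json: str, model_path: str = "model.joblib") -> float:
    """Take a standardized property record as JSON and return a predicted price in USD."""
    record = json.loads(property_json)
    model = joblib.load(model_path)
    row = pd.DataFrame([[record[name] for name in FEATURES]], columns=FEATURES)
    return float(model.predict(row)[0])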

2. Upload to Hugging Face

Upload your model files to a Hugging Face repository.

Requirements:

  • Model weights and architecture
  • Inference code
  • Dependencies specification
  • README with usage instructions
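
One way to publish these files is the huggingface_hub Python client, sketched below. The repository name is a placeholder, and the exact repository layout validators expect is defined in the RESI repository.

# Sketch of uploading a model directory to Hugging Face; the repo id is a placeholder.
from huggingface_hub import HfApi

api = HfApi()  # authenticates via `huggingface-cli login` or the HF_TOKEN environment variable
repo_id = "your-username/subnet46-property-model"  # placeholder

api.create_repo(repo_id=repo_id, repo_type="model", exist_ok=True)
api.upload_folder(
    folder_path="./model",  # weights, inference code, dependency spec, README
    repo_id=repo_id,
    repo_type="model",
    commit_message="Initial model submission",
)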

3. Register on-chain

Save your model hash on Subnet 46.

# Register model hash (specific command TBD)
# This timestamps your submission and proves ownership

The on-chain registration creates a verifiable timestamp proving submission time and ownership.
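
While the registration command itself is still TBD, the hash can be computed locally ahead of time. A SHA-256 digest of the model artifact is one straightforward choice; whether Subnet 46 hashes a single file or the whole repository is defined in the RESI repository.

# Compute a SHA-256 digest of a model artifact locally (the registration command itself is TBD).
import hashlib

def file_sha256(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest()

print(file_sha256("model.joblib"))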

4. Validation cycle

24.5 hours after submission, validators begin benchmarking your model.

Process:

  1. Validators scrape recent home sales (last 24 hours)
  2. Run inference with your model on those properties
  3. Compare predictions to actual sale prices
  4. Calculate accuracy (R² and potentially other metrics)
  5. Assign weights based on performance

5. Earn rewards

If your model achieves the highest accuracy, you receive emissions for that cycle.

Previous winning models continue to be evaluated. Your model competes against all active models, not just new submissions.

Best practices

Data quality: Clean, comprehensive training data improves model performance.

Feature engineering: Property characteristics, location data, and temporal features all impact accuracy.

Model validation: Test on holdout data before submission to estimate performance (see the holdout sketch below).

Iteration: Submit improved models as you refine your approach.

Monitor performance: Track your model's ongoing validation results.
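
For the model validation practice above, a simple holdout split scored with R² mirrors the primary metric validators use. The file and column names below are hypothetical.

# Estimate performance on a holdout split before submitting; file and columns are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("training_data.csv")
X = df.drop(columns=["sale_price"]).select_dtypes("number")  # numeric features only
y = df["sale_price"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = GradientBoostingRegressor(random_state=42)
model.fit(X_train, y_train)
print("Holdout R^2:", r2_score(y_test, model.predict(X_test)))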

Getting started

  1. Set up Bittensor wallet:
pip install bittensor
btcli wallet create
  2. Register on Subnet 46:
btcli subnet register --netuid 46
  3. Review GitHub repository: github.com/resi-labs-ai/resi
  4. Join community: Discord and community links

Example workflow

Starting point: Apply California housing Kaggle competition techniques to nationwide data.

Steps:

  1. Gather training data
  2. Clean and preprocess
  3. Train model (consider state-by-state models)
  4. Validate locally
  5. Upload to Hugging Face
  6. Register on Subnet 46
  7. Monitor validation results
  8. Iterate and improve

Technical specifications

  • Model format: Hugging Face compatible
  • Input format: Standardized property data JSON
  • Output format: Predicted price in USD
  • Validation metric: R² (coefficient of determination)
  • Validation frequency: 24-hour cycles
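
The authoritative schema lives in the RESI repository; purely as an illustration, a standardized input record and the expected output might look like this (all field names hypothetical):

# Illustrative shapes only; the real input/output schema is defined in the RESI repository.
example_input = {  # standardized property data (hypothetical fields)
    "address": "123 Main St, Austin, TX",
    "sqft": 1850,
    "bedrooms": 3,
    "bathrooms": 2.0,
    "year_built": 1998,
}
example_output = {"predicted_price_usd": 412500.0}  # predicted price in USD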

Support

For support, join the Discord or open an issue on the GitHub repository.

Validating on Subnet 46

Earn emissions by benchmarking submitted property valuation models.

Why validate

Validators earn alpha token emissions for maintaining network integrity. You provide the critical service of fairly evaluating miner submissions.

Requirements

Hardware

  • Sufficient compute to run inference on up to 256 models
  • Storage for model files and validation datasets
  • Reliable network connection

Technical

  • Ability to run validator software
  • Linux environment recommended
  • Bittensor wallet and registration on Subnet 46

Uptime

Consistent participation is important. Missing validation cycles reduces your validator weight and earnings.

Becoming a validator

Registration is currently open; it may become whitelist-only in the future.

1. Set up Bittensor wallet

pip install bittensor
btcli wallet create

2. Register on Subnet 46

btcli subnet register --netuid 46

3. Run validator software

Clone the RESI repository and follow setup instructions:

git clone https://github.com/resi-labs-ai/resi
cd resi
# Follow setup instructions in README

4. Stay updated

Validators should monitor repository updates. We recommend:

  • Watch the GitHub repository
  • Join Discord for validator announcements
  • Enable notifications for validator-tagged pull requests

Validation process

Validators run a 24-hour cycle:

Hour 24.5: Data collection

Scrape home sales from the previous 24 hours. This ensures validation data was unavailable during model training.

Hour 27: Begin validation

Run inference on all submitted models.

Process:

  1. Download models from Hugging Face
  2. Verify model hashes match on-chain registration
  3. Run inference on validation dataset
  4. Calculate accuracy metrics
  5. Set weights based on performance
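
The sketch below compresses this loop into a few lines, assuming hypothetical submission records and a hypothetical load_model helper; the authoritative validator logic lives in the RESI repository.

# Compressed sketch of the validation loop; submission fields and load_model are hypothetical.
import hashlib
from huggingface_hub import snapshot_download
from sklearn.metrics import r2_score

def sha256_of(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def evaluate(submissions, properties, actual_prices):
    """submissions: dicts with 'repo_id', 'onchain_hash', and a 'load_model' callable."""
    scores = {}
    for sub in submissions:
        local_dir = snapshot_download(repo_id=sub["repo_id"])    # 1. download from Hugging Face
        if sha256_of(f"{local_dir}/model.joblib") != sub["onchain_hash"]:
            continue                                             # 2. hash mismatch: skip the model
        model = sub["load_model"](local_dir)                     # 3. run inference
        predictions = model.predict(properties)
        scores[sub["repo_id"]] = r2_score(actual_prices, predictions)  # 4. accuracy metric
    return scores                                                # 5. weights are set from these scores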

Anti-gaming measures

Validators implement multiple checks:

Duplicate detection: Identify identical models submitted by different miners. Only the first uploader receives rewards (see the sketch below).

Timing verification: Ensure models execute within expected time bounds.

Cross-validator consensus: Compare results with other validators.

Baseline thresholds: Models must achieve minimum accuracy to receive full evaluation.
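
The duplicate-detection check above can be as simple as grouping submissions by model hash and keeping only the earliest on-chain registration. A sketch with hypothetical field names:

# Keep only the earliest registration for each identical model hash; fields are hypothetical.
def deduplicate(submissions):
    """submissions: dicts with 'hash', 'miner_uid', and 'registered_block'."""
    first_by_hash = {}
    for sub in sorted(submissions, key=lambda s: s["registered_block"]):
        first_by_hash.setdefault(sub["hash"], sub)  # earliest block wins; later copies are ignored
    return list(first_by_hash.values())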

Performance measurement

Primary metric: R² (coefficient of determination)

Optional metrics: Additional accuracy measures may be added.

State-level analysis: Validation may measure per-state accuracy to reward universal models.

Weight setting

Validators set weights based on model performance. Distribution is currently winner-take-all, with all weight going to the top performer.

Weights must match across validators for consensus. This requires deterministic validation.
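
With winner-take-all distribution, the weight vector is zero everywhere except the top performer's UID. The sketch below uses the Bittensor SDK's set_weights call; check the RESI validator code for the authoritative arguments.

# Winner-take-all weights via the Bittensor SDK; verify arguments against the RESI validator code.
import bittensor as bt

def set_winner_take_all(wallet, scores, netuid=46):
    """scores: dict mapping miner UID -> R^2 for this cycle."""
    winner_uid = max(scores, key=scores.get)
    uids = list(scores.keys())
    weights = [1.0 if uid == winner_uid else 0.0 for uid in uids]  # all weight to the top model
    subtensor = bt.subtensor()
    subtensor.set_weights(wallet=wallet, netuid=netuid, uids=uids, weights=weights)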

Efficiency optimization

Threshold system: Models achieving accuracy within 10% of previous day's top model receive full evaluation.

Early stopping: After evaluating 50% of properties, models performing significantly below threshold may be skipped.

This conserves resources while ensuring competitive models receive fair evaluation.
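
A sketch of the early-stopping idea, assuming the properties and prices are array-like and the threshold is the previous day's top R² (the exact margin arithmetic here is illustrative):

# Early-stopping sketch; the 10% margin interpretation here is illustrative only.
from sklearn.metrics import r2_score

def evaluate_with_early_stop(model, properties, actual_prices, top_r2_yesterday, margin=0.10):
    halfway = len(properties) // 2
    half_r2 = r2_score(actual_prices[:halfway], model.predict(properties[:halfway]))
    if half_r2 < top_r2_yesterday - margin:   # far below yesterday's top model: skip the rest
        return half_r2, False                 # partial score, not fully evaluated
    full_r2 = r2_score(actual_prices, model.predict(properties))
    return full_r2, True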

Validator operations

Daily cycle

  1. Wait for new models to be submitted
  2. Begin data scraping at hour 24.5
  3. Run validation at hour 27
  4. Set weights based on results
  5. Submit weights to Subnet 46
  6. Earn emissions proportional to stake

Monitoring

Track your validator performance:

btcli subnet metagraph --netuid 46

Troubleshooting

  • Review validator logs for errors
  • Check GitHub issues for known problems
  • Ask in Discord validator channel

Best practices

Keep software updated: Pull latest changes regularly.

Monitor resources: Ensure sufficient compute and storage.

Maintain uptime: Consistent participation maximizes earnings.

Verify consensus: Check that your weights match other validators.

Report issues: Help improve the network by reporting bugs.

Future opportunities

Validators may eventually run inference for the production API, providing paid oracle services using top-performing models.

Getting help