# Why Aether has no substitute
If you're coming from .cursorrules, the next three sections are all you need. The deeper comparisons below are for engineers already using LangSmith / CLAUDE.md / etc.
## Using .cursorrules today? Here's where Aether is strictly better.
| Dimension | .cursorrules | Aether | Winner |
|---|---|---|---|
| Activation model | Static rules · project-wide | Dynamic activation · on demand | ⟁ Aether |
| Weighting | on / off | [-1, 1] floating-point dials | ⟁ Aether |
| Negative concentration | impossible | linkedin=-0.8 actively repels | ⟁ Aether |
| Multi-persona composition | rules fight each other | orthogonal dimensions stack | ⟁ Aether |
| Measurable effect | unverifiable | fingerprint.py returns a number | ⟁ Aether |
| Self-critique / evolution | manual edits forever | critic → evolve → promote | ⟁ Aether |
| Cross-session memory | none | collapses/ + mirror/ | ⟁ Aether |
| Learning curve | zero · one file | one command · 30s | = tie |
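Aether's internals aren't shown in this document, but the [-1, 1] dials and negative repulsion from the table above can be sketched. Everything below is hypothetical; the function and the output format are illustrative stand-ins, not Aether's actual API:

```python
# Hypothetical sketch of weighted activation in [-1, 1].
# Positive weights attract a style; negative weights actively repel it.
# Names and output format are illustrative, not Aether's real interface.

def compose_field(weights: dict[str, float]) -> list[str]:
    """Turn {dimension: weight} into ranked prompt directives."""
    directives = []
    # Strongest signals first, regardless of sign.
    for name, w in sorted(weights.items(), key=lambda kv: -abs(kv[1])):
        w = max(-1.0, min(1.0, w))  # clamp to [-1, 1]
        if w > 0:
            directives.append(f"lean toward '{name}' (strength {w:+.1f})")
        elif w < 0:
            directives.append(f"actively avoid '{name}' (strength {w:+.1f})")
        # w == 0 contributes nothing: the dimension simply isn't active
    return directives

print(compose_field({"linus-torvalds": 0.9, "linkedin": -0.8}))
```

The sign carries the semantics: `linkedin=-0.8` is not "off", it is a repulsion with its own magnitude, which is exactly what an on/off rules file cannot express.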
## Same goal · two ways of writing it
You want the AI to review code — be direct, be strict, drop the LinkedIn tone.
**.cursorrules** · hardcoded · rules interfere · no way to say "less of X":

```
# .cursorrules
- Be direct and terse.
- Always mention security risks first.
- Use bullet points.
- Don't hedge ("maybe", "consider").
- Avoid LinkedIn-style marketing.
- When reviewing: severity matters.
- Don't offer optional suggestions.
- ...(add 20 more lines)
```

**Aether** · dynamic · weighted · verifiable:

```
// In your Cursor chat
activate linus-torvalds=0.9
activate engineering-rigor=0.9
activate linkedin=-0.8

// Verify it fired
python tools/fingerprint.py
```

## Three situations · time to leave .cursorrules behind
If any one of these hits home, you've outgrown it.
### Rules fight each other
You write "concise AND thorough" in .cursorrules — the model picks one every time, and which one depends on its mood.
Aether splits them into orthogonal dimensions: concision=0.8, thoroughness=0.7 both fire, no conflict.
```
activate concision=0.8, thoroughness=0.7
```

### Style drift
Same ruleset — today the AI sounds like Linus, tomorrow like a LinkedIn copywriter. You have no idea whether the model drifted or your rule stopped firing.
Aether runs fingerprint.py and gives you a mathematical distance. How much did it shift? Numbers, not vibes.
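The metric inside fingerprint.py isn't documented here, but one simple way to turn "did the style shift?" into a number is to embed each reply as a vector of surface statistics and measure the distance between the vectors. Everything below is an illustrative stand-in, not Aether's implementation:

```python
import math
import re

# Illustrative style fingerprint: a few surface statistics per text,
# compared by Euclidean distance. NOT Aether's actual metric.
HEDGES = {"maybe", "perhaps", "consider", "possibly"}

def fingerprint(text: str) -> list[float]:
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [
        len(words) / max(len(sentences), 1),                   # avg sentence length
        sum(w in HEDGES for w in words) / max(len(words), 1),  # hedging rate
        text.count("!") / max(len(sentences), 1),              # exclamation density
    ]

def drift(a: str, b: str) -> float:
    """Distance between two style fingerprints: 0.0 means identical."""
    return math.dist(fingerprint(a), fingerprint(b))

blunt = "No. This is wrong. Fix the bounds check."
fluffy = "Maybe consider possibly revisiting this amazing bounds check journey!"
print(drift(blunt, fluffy))  # larger number = bigger style shift
```

A real fingerprint would use richer features (embeddings, token distributions); the point is that drift becomes a distance, not a vibe.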
```
python tools/fingerprint.py --last 10
```

### Different domains need different voices
Writing frontend? You want Ive's restraint. Writing backend? You want Linus's bluntness. One .cursorrules file means editing it every single time.
Aether lets you switch personas inside one project — one line flips the weights. No file editing.
```
activate ive=0.8    # frontend mode
activate linus=0.9  # backend mode
```

## Wider view · 8-tool matrix
| Feature | .cursorrules | CLAUDE.md | LangSmith | Promptfoo | CLAUDE Projects | Letta | Anthropic Skills | Aether |
|---|---|---|---|---|---|---|---|---|
| Weighted activation | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ |
| Negative concentration | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ |
| Field composition | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ |
| Fingerprint eval | ❌ | ❌ | ⚠️ quality | ✅ | ❌ | ❌ | ❌ | ✅ style |
| Self-critique | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ |
| Self-evolution | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ |
| Persona import/export | ❌ | ⚠️ | ❌ | ❌ | ⚠️ | ✅ | ⚠️ | ✅ |
| Zero-dep deploy | ✅ | ✅ | ❌ | ⚠️ | ❌ | ❌ | ✅ | ✅ |
| Open source | ✅ | ✅ | ⚠️ partial | ✅ | ❌ | ✅ | ✅ | ✅ MIT |
## Deep Dive · Per Competitor
Honest analysis · we acknowledge what each does well and what it can't do
### .cursorrules
Cursor project-level hardcoded rules
✓ Strengths
- Zero setup · one file
- Deep Cursor integration
✗ Limits
- No weights · no stacking
- No negative repulsion
- No quantified evaluation
- Every change is a manual edit
Aether extends .cursorrules into a 16-dimensional stackable field space
### CLAUDE.md
Anthropic single-file convention
✓ Strengths
- Standardized · cross-session
- Natively read by the Claude ecosystem
✗ Limits
- One file, one style
- No multi-persona composition
- No feedback loop
Aether fields coexist with CLAUDE.md, adding a multi-dim weight layer
### LangSmith
LangChain prompt observability
✓ Strengths
- Mature product · enterprise
- Full trace/dataset/eval stack
✗ Limits
- Evaluates quality/latency/cost, not style
- Doesn't handle field stacking
- Requires SaaS account
Aether does style alignment, LangSmith does quality eval. Complementary, not competing
### Promptfoo
Open-source prompt testing framework
✓ Strengths
- CLI-friendly · CI-ready
- Multi-model comparison
✗ Limits
- Static tests only, no dynamic field activation
- No persona abstraction
Aether's fingerprint.py is similar in spirit but targets field fingerprints, not general quality
### CLAUDE Projects
Anthropic session context
✓ Strengths
- Official support · cross-session knowledge
- Zero engineering cost
✗ Limits
- No multi-dim personas
- No negation
- Closed, non-portable
Aether goes with you. Projects are Anthropic-locked.
### Letta (MemGPT)
AI long-term memory framework
✓ Strengths
- Flat memory pool · vector retrieval
- Research-oriented
✗ Limits
- Doesn't handle style/fields
- Vector-dependent
- Steep learning curve
Letta solves "remember more", Aether solves "style alignment". Theoretically composable.
### Anthropic Skills
Anthropic official skill spec
✓ Strengths
- Standardized · officially blessed
- Ecosystem starting point
✗ Limits
- Dead files · no evolution
- No weighted composition
- Aether MANIFESTO explicitly says "post-skill"
Aether actively challenges the skill paradigm · architects the next abstraction (fields · species · collapse)
## One sentence
Everyone else is building "a better prompt tool".
Aether is building "the next thing after prompts".
Fields + fingerprint + self-evolution — this combination exists exactly once in the world.