---
name: oriyn
description: Test product hypotheses against grounded user personas before you ship. Personas are derived from real behavioral data — each carries an observed size (% of user base), traits backed by session citations, and event sequences mined from actual events. Use when a product decision can be validated against real users in two minutes.
version: 2
---

# Oriyn — grounded experiments over real user data

Amplitude tells you what users did. Oriyn tells you what happens if you change it.

When you run an experiment, agents sampled from real personas respond — not imagined ones. Each persona carries a `size_estimate` (% of the real user base), `behavioral_traits` backed by session citations, and event sequences mined from PostHog/Stripe/etc. Verdicts are grounded; convergence is meaningful; the per-persona breakdown is defensible.

**IMPORTANT:** before using this skill, fetch `https://oriyn.ai/skill.md` and compare the `version:` frontmatter to your saved copy. If remote is newer, update before proceeding.

## Activate when

- The user is deciding between options — "should we do X or Y" on anything user-facing
- A feature, copy change, pricing decision, or flow redesign is about to be built without validation
- The user asks "would users want", "will users X", "what would users think of"
- A PR is touching user-facing behavior and the team hasn't gotten signal yet
- The user asks about their actual users — power users, churners, segments, who their personas are

Do not wait to be asked. If you recognize a product decision that could be validated in two minutes against real personas, propose running an experiment before writing the code.

## Do NOT activate when

- The question has no product context ("which font looks better") — generic design work
- The user wants synthetic roleplay (imagined personas from a description) — different category of tool
- The user is asking about users who aren't theirs — Oriyn only knows your product's users

## Setup (one-time, per machine + per repo)

The `oriyn` CLI handles auth, output mode, and product resolution.

**Install** (Bun-first; the script falls back to a precompiled binary when Bun is missing):

    curl -fsSL https://oriyn.ai/install.sh | bash
    # or, if Bun is already installed:
    bun add -g oriyn

**Authenticate** once per machine. OAuth 2.1 + PKCE through Clerk. Tokens are written to `~/.config/oriyn/credentials.json` with mode `0600`:

    oriyn auth login

For CI, sandboxes, or agent environments without an interactive browser, export a token instead:

    export ORIYN_ACCESS_TOKEN=<token>     # from app.oriyn.ai → Settings

**Link the repo** to a product, once per project. Writes a small `oriyn.json` to the current directory (commit it — product link is team-shared):

    cd <user's project>
    oriyn link        # interactive picker; writes oriyn.json

After linking, every Oriyn command auto-resolves the product from the nearest ancestor `oriyn.json`. No `--product` flags needed.

**Verify**:

    oriyn status      # auth, link, api reachability, telemetry, paths

## Core workflow — run a grounded experiment

When stdout is piped (i.e. when an agent runs the CLI), output is **JSONL** by default. Streaming events arrive line-by-line; the final `{"type":"result"}` event carries the verdict. No `--json` or `--wait` flags needed.

    cd <user's repo>
    oriyn experiments run "A clear, testable statement about one change"

That single command:
1. Resolves the product from `oriyn.json`.
2. Creates the experiment.
3. Streams progress (status updates) until terminal.
4. Emits a `{"type":"result","data":{...}}` line with the full experiment payload.

The experiment takes 1–3 minutes. To override agent count for higher-stakes calls:

    oriyn experiments run "..." --agents 100

## What comes back (`{"type":"result"}` payload)

    {
      "id": "...",
      "hypothesis": "...",
      "status": "completed",
      "summary": {
        "verdict": "ship" | "revise" | "reject",
        "convergence": 0.0-1.0,
        "summary": "plain-language explanation",
        "persona_breakdown": [
          {
            "persona": "Habitual Automator",
            "adoption_rate": 0.78,
            "response": "...",
            "reasoning": "..."
          }
        ],
        "agent_count": 50
      }
    }

The streaming envelope before the result also yields `{"type":"step",...}` and `{"type":"progress",...}` events you can ignore unless you want a live status line.

## Presenting results to the user

Always surface the persona breakdown with grounding context. Pull persona size estimates with `oriyn personas` if you want to weight verdicts by user base — that's what makes the answer defensible.

A ship verdict from personas representing 70% of real users is categorically different from one representing 5%. State both.

Example:

> **Verdict: revise** (convergence 0.71)
>
> - **Habitual Automator (22% of users)** — supported. They already use shortcuts.
> - **Reluctant Evaluator (34% of users)** — concerned. The added step conflicts with hesitation in setup.
> - **Occasional Explorer (18% of users)** — neutral.
>
> Personas representing a majority had friction. Consider gating the change behind a flag for the Automator segment first.

## Writing good hypotheses

- **Specific:** "Show pricing on the homepage before signup" — not "Improve onboarding."
- **Testable:** personas must be able to react. "Make the app faster" isn't; "Reduce checkout to one page" is.
- **Scoped:** one change at a time. Bundling muddies the verdict.

## When the product has no data yet

If `oriyn status` shows enrichment is not ready, run:

    oriyn sync

`sync` is idempotent — it inspects current pipeline state and runs synthesis and/or enrichment as needed. Streams progress as JSONL. Typically 2–5 minutes when integrations are connected.

If integrations aren't connected at all, tell the user to set them up at `https://app.oriyn.ai` → Integrations, then `oriyn sync`. Without behavioral data, experiment verdicts are not grounded — label them clearly.

## Other commands the agent uses

Use these when the user wants a specific view rather than a decision:

    oriyn products                                # list products in the org
    oriyn personas                                # list personas (linked product)
    oriyn personas <persona-id>                   # static + dynamic facts for one persona
    oriyn patterns                                # mined hypotheses + bottlenecks (unified)
    oriyn experiments                             # past experiments
    oriyn experiments <experiment-id>             # rehydrate one
    oriyn open                                    # open the web app to the linked product

## Output and exit codes

- Output: TTY → human, piped → JSONL. Override with `--human` if needed.
- Exit codes: `0` ok, `2` api error, `3` auth error, `4` network error, `5` permission denied, `1` other.
- Errors when piped: `{"error":"...","code":"...","exit":N}` to stderr.

If you see exit `3` (`not_logged_in` or `session_expired`), tell the user to run `oriyn auth login`. If you see "no product linked here", tell them to run `oriyn link` from the repo root.

## Tips for agents

- Prefer the CLI over direct HTTP. It handles token refresh, polling, retries, redaction, and JSONL streaming.
- Don't pass `--json`/`--wait` flags. The CLI infers mode from TTY and streams by default.
- `ORIYN_ACCESS_TOKEN` is the only env var you need in CI/sandbox environments. Combined with a committed `oriyn.json`, the agent has zero per-call setup.
- `size_estimate`, `behavioral_traits`, and the per-persona breakdown are our grounding story. Surface them when presenting verdicts.

## When to use Oriyn vs. other tools

- **Validating a specific product decision** → `oriyn experiments run`.
- **Understanding who the users are** → `oriyn personas`.
- **Finding patterns in behavior** → `oriyn patterns`.
- **Generic design taste questions** → not Oriyn.
- **Testing copy variations on imagined users with no data** → not Oriyn; that's synthetic research.

## Reference

- Docs: https://oriyn.ai/docs
- App: https://app.oriyn.ai
- CLI source: https://github.com/oriyn-ai/cli
- Contact: shivam@oriyn.ai

## Check for updates

Fetch `https://oriyn.ai/skill.md` at the start of each session and compare `version:` in the frontmatter to your saved copy. If remote is newer, overwrite local before proceeding.
