Privora 泊睿 User Guide

Your cloud-native data workstation and quantitative monitoring powerhouse. Build automated data pipelines or run quantitative strategies with built-in A/H stock data — no tedious low-level coding required.

Getting Started

  • Registration & Survey: After your first registration and login, the system guides you through a Welcome Survey (role, purpose, etc.). Once completed, an 8-step Product Tour plays automatically to help you quickly familiarize yourself with the sidebar layout.
  • Re-trigger Tour: If you skipped the tour, you can restart it anytime by clicking "Feature Tour" at the bottom of the left navigation bar.
  • Language Switch: Click "Language" in the sidebar to switch the system language in real-time.

Insight Studio (Dashboards)

Turn raw data into real-time dashboards and metric alerts — your boss will love it.

1. Build Dashboards & Global Variables

Insight Studio -> Dashboard Builder -> [Create Panel]
  • Data Binding: Bind time-series charts and tables directly to registered assets from the Asset Catalog, or select "Custom SQL Query" to write raw SQL.
  • Linked Filtering: Add global dropdown variables (e.g. stock_code), reference them in chart SQL via ${stock_code}. Switch the dropdown and all charts refresh instantly.

2. Metric Alerts

  • In a widget's advanced config, click "Add Alert Rule".
  • Set logic & threshold (e.g. Sum > 1000), bind a pre-configured Webhook bot from your Data Sources.
  • Set a Silence Period to prevent alert storms from repeated notifications.

Asset Studio (Data Foundation)

Standardize messy databases and APIs into platform-consumable "Data Assets".

1. Configure Data Source Connections

Asset Studio -> Data Source Connections -> [Add Data Source]
  • Category: Select system type — Database, API, Webhook (for alert notifications like Feishu/DingTalk bots), etc.
  • Authentication: Fill in credentials for databases; configure Token or Sign Secret for API/Webhook.
  • Server Type: Choose Production, Development, or Test to keep environments physically isolated.

2. Register & Manage Data Assets

Asset Studio -> Asset Catalog -> [Edit / Add Data Asset]
  • Sensitivity Control: Public — visible to other teams. Internal — private to your team. Internal assets must include a permission_field tag (e.g. permission_field:tenant_id) before they can be published externally; the system then auto-applies Row-Level Security (RLS).
  • Data Profile: In the detail page, click "Data Profile" to auto-generate null rates, min/max values, and distribution charts for each column.

Process Studio (Data Factory)

No need to write lengthy scripts — drag, drop, and SQL your way to data cleansing and transformation pipelines.

1. Process Diagram Canvas

  • Drag & Configure Nodes: Drag Database, API, Transform, Filter nodes from the left panel onto the grid. Double-click to configure. For SQL nodes, write logic and click "Format SQL" to beautify. Supports Retry Times on failure.
  • Connection Control: Drag from a node's right port to the next node. Double-click a line to set trigger conditions (On Success for normal flow, On Failure for fallback compensation).
  • Version Snapshots: The system auto-saves snapshots of every modification. Supports version diff "Compare" and one-click "Restore" rollback.
  • Agent / API Updates: Agents and scripts can update an existing pipeline via PUT /api/ingestions/{id} (scope process.pipeline.update). Omit nodes to rename only; send nodes=[...] to fully replace steps. Every PUT writes a version snapshot — any mistake can be rolled back from the Versions tab. Legacy rows with an empty team are rejected with 403 until backfilled.
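A hedged sketch of the two PUT /api/ingestions/{id} request bodies described above (scope process.pipeline.update). The field names inside nodes are illustrative assumptions, not the exact platform schema:

```python
# Two payload shapes for PUT /api/ingestions/{id}
# (node field names are illustrative assumptions, not the platform schema).
import json

rename_only = {"name": "daily-clean-v2"}        # omit `nodes` → rename only

full_replace = {
    "name": "daily-clean-v2",
    "nodes": [                                  # send nodes=[...] → replace all steps
        {"type": "sql", "name": "load_raw", "sql": "SELECT 1"},
        {"type": "python_script", "name": "clean", "code": "log.info('ok')"},
    ],
}

body = json.dumps(full_replace)                 # serialized body for the PUT request
```

Either shape writes a version snapshot, so a bad update can be rolled back from the Versions tab.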

2. Built-in Components

The left panel of the diagram editor provides various components. Drag them onto the canvas and double-click to configure. Each component has a "Guide" tab with detailed usage instructions.

Python Script (python_script)

Run custom Python code with the built-in lg_utils library (no installation required):

  • get_context() — View your team's available assets, datasources, dashboards, and processes
  • get_asset_data("name", filter_column=..., filter_value=[...], filter_operator="eq") — Fetch asset data. Pass a list to filter_value + filter_operator="eq" to do a single IN query across many symbols (e.g. stock_num IN (601985,600050,002085))
  • get_portfolio_positions(stock_num=None) — Read your team's current holdings; each row carries the latest Process recommendation (Action / Add1,2 / Reduce1,2)
  • get_trading_records(account_id=None, market=None, stock_num=None, trade_type=None, page=1, size=50) — Read your team's trading records, paginated. Filter optionally by accountId, market, stockNum, tradeType. Use for swing-monitor anchors (latest BUY/SELL) or transaction-record-driven backtests inside Process.
  • write_recommendations([{...}]) — Append per-stock recommendation rows (history is preserved). The holdings page exposes the history via a per-row "Rec history" button — paginated, newest first.
  • get_connection("ds_name") — Connect to team datasources (auto-resolves config; supports PostgreSQL/MySQL/Oracle/SQL Server)
  • get_variable("key") — Read scheduling context variables (job name, batch number, etc.)
  • put_variable("key", value) — Write a variable back into the pipeline context so downstream steps can reference it via ${key} (good for log summaries, counts, small JSON values; ≤ 64 KB per value)
  • log.info() / warn() / error() — Structured log output, displayed in real-time in execution logs
  • backtest(..., persist=True, persist_name="...") — Run a historical backtest and persist the result to My Backtests in one call; multiple runs with different persist_name values can be compared side-by-side on Sharpe, max drawdown, and total return.
  • result.persist(name="...") — Manually persist an existing backtest result to My Backtests — useful for recording a result outside of a backtest() call, or backfilling.

SQL Execution (sql)

Execute SQL statements on a specified datasource. Supports multiple statements separated by semicolons, variable substitution ${variable}, and row_count tracking.

Fetch Asset Data (fetchAssetData)

Pull paginated rows for a registered asset into a pipeline variable — downstream Python / SQL / Loop nodes read via ${varName.data}. Supports single- and multi-value filters (e.g. stock_num="601985,600050" → IN query). Team permissions are enforced automatically; no credentials are exposed to the step.

Fetch Team Holdings (fetchPortfolioPositions)

Load your team's current portfolio into a pipeline variable, each row enriched with the latest Process recommendation. Typical pattern: fetchPortfolioPositions → pythonScript that scores each holding → write_recommendations() to push the result back to the holdings page.
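The scoring step in that pattern can be a plain function. A hedged sketch, assuming illustrative field names (stock_num, cost_price, last_price) rather than the exact holdings schema:

```python
# Score one holdings row into a recommendation row
# (field names are illustrative assumptions, not the platform schema).
def score_holding(pos):
    ret = (pos["last_price"] - pos["cost_price"]) / pos["cost_price"]
    if ret < -0.08:
        return {"stock_num": pos["stock_num"], "action": "Reduce1"}
    if ret > 0.15:
        return {"stock_num": pos["stock_num"], "action": "Add1"}
    return {"stock_num": pos["stock_num"], "action": "no_more_add"}

# Inside the platform step this becomes:
#   positions = get_portfolio_positions()
#   write_recommendations([score_holding(p) for p in positions])
sample = [
    {"stock_num": "601985", "cost_price": 8.0, "last_price": 9.4},   # +17.5%
    {"stock_num": "600050", "cost_price": 6.0, "last_price": 5.4},   # -10.0%
]
recs = [score_holding(p) for p in sample]
```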

Conditional Branch (if)

Route the flow to different branches based on condition expressions. Supports comparison operators (>, <, ==), logical operators (&&, ||), and an else default branch.

Loop (for)

Loop over child steps. Three modes: counter loop (i=0;i<10;i++), SQL cursor loop (row in SELECT ...#datasource), file line loop (line open path).

Variable Assignment (var)

Set or compute variable values. Supports strings, numbers, JSON, SQL query results, and math expressions. Enable setGlobal to write variables to the global context.

HTTP Call (callService)

Call external REST APIs. Supports GET/POST/PUT/DELETE with custom headers. Response available via ${logInfo} in subsequent steps.

Send Email (sendMail)

Send email notifications via SMTP. Supports HTML/plain text, multiple recipients/CC, and variable substitution. Ideal for completion notifications and alerts.

Tip: Double-click any component, then switch to the "Guide" tab for complete parameter reference and code examples.

3. Process Runtime — Backtest API

Use historical market data to simulate and measure a strategy's performance, entirely inside a python_script step — no extra infrastructure required.

1. Quick Start (3 steps)

  1. In the <a href="/processes">Process List</a>, create a new process.
  2. Drag in a <code>python_script</code> node and write your strategy (see the example below).
  3. Run the process; results are saved to My Backtests.

2. Minimal Single-Stock Example (stock_day)

Uses the stock_day asset (built-in A/H share daily bars). Column mapping: date_column="day_id", price_columns={open: "OPEN_PRICE", close: "CLOSE_PRICE"}, filter_column="STOCK_NUM".

from lg_utils import get_variable
from lg_utils.backtest_examples.stock_day import run_stock_day_backtest

def my_strategy(bar, ctx):
    if len(ctx.history) < 20:
        return
    ma20 = sum(b.close for b in ctx.history[-20:]) / 20
    if bar.close > ma20 and ctx.position == 0:
        ctx.buy(size="all")
    elif bar.close < ma20 and ctx.position > 0:
        ctx.sell(size="all")

result = run_stock_day_backtest(
    strategy=my_strategy,
    stock_num=get_variable("stock_num", "000001"),
    start=get_variable("start_date"),   # 'YYYYMMDD' or 'YYYY-MM-DD'
    end=get_variable("end_date"),
    initial_cash=1_000_000,
    commission_bps=3,
    slippage_bps=1,
)
print(result.summary())
result.export_to_context("run1")   # snapshot to run log
result.persist(name="run1")        # save to My Backtests

3. Full backtest() Signature (24 parameters)

All parameters with defaults — pass only what differs from the defaults:

  • strategy — Callable <code>fn(bar, ctx)</code>, or an object with <code>on_bar(bar, ctx)</code>. Optional hooks: <code>on_start(ctx)</code> / <code>on_end(ctx)</code>.
  • asset — Asset ID (int) or asset name (str) — passed to <code>get_asset_data</code>.
  • start, end — Date strings (closed interval) used to slice bars. <code>None</code> = no clipping. Both 'YYYYMMDD' and 'YYYY-MM-DD' are auto-normalized.
  • initial_cash — Starting cash. Default: <code>1_000_000.0</code>.
  • commission_bps — Commission in basis points (1 bp = 1/10 000). Default: <code>0.0</code>.
  • slippage_bps — Slippage in basis points. Default: <code>0.0</code>.
  • fill — <code>"next_open"</code> (default) — fills at the open of the next bar. <code>"this_close"</code> — fills at the current bar's close.
  • date_column — Column name that holds the bar date. Default: <code>"trade_date"</code>. For stock_day use <code>"day_id"</code>.
  • price_columns — Dict mapping logical names to actual column names, e.g. <code>{"open": "OPEN_PRICE", "close": "CLOSE_PRICE"}</code>. Defaults: open/high/low/close/volume.
  • filter_column, filter_value — Server-side filter pushed to <code>get_asset_data</code>. Use for multi-symbol tables (e.g. <code>filter_column="STOCK_NUM", filter_value="000001"</code>).
  • warmup_bars — First N bars are fed to <code>ctx.history</code> but the strategy callback is not called. Default: <code>0</code>.
  • max_bars — Hard cap on bars loaded to avoid runaway fetches. Default: <code>1_000_000</code>.
  • max_history — Max length of <code>ctx.history</code>. <code>None</code> = unlimited.
  • on_trade — Callback <code>fn(trade_dict)</code> fired on each completed round-trip.
  • benchmark_asset — Optional asset name/ID for benchmark comparison. Produces <code>benchmark_return</code>, <code>alpha</code>, <code>beta</code> in metrics.
  • benchmark_price_column — Benchmark close column. Defaults to the same as <code>price_columns["close"]</code>.
  • benchmark_filter_column, benchmark_filter_value — Server-side filter for the benchmark asset (same semantics as filter_column / filter_value).
  • persist — If <code>True</code>, calls <code>result.persist(name=persist_name)</code> automatically at the end. Default: <code>False</code>.
  • persist_name — Label stored when <code>persist=True</code>; same as the <code>name</code> arg of <code>BacktestResult.persist()</code>.
  • bars — Bypass <code>get_asset_data</code> and supply bars directly as a list of dicts (useful for unit tests or custom data sources).

4. Runtime Objects: Bar / Context / BacktestResult

Bar — Named-tuple passed to the strategy callback each tick.

  • bar.dt — Bar date string (same value as the raw <code>date_column</code> field).
  • bar.open, bar.high, bar.low, bar.close, bar.volume — Resolved numeric prices and volume. <code>None</code> if the column is absent in the asset.
  • bar.raw — Original raw row dict from the data source — useful for accessing non-price columns.

Context — Strategy runtime context — holds account state and order submission methods.

  • ctx.position — Current number of shares held (integer, long-only).
  • ctx.cash — Available cash.
  • ctx.equity — Total portfolio value: cash + position × current bar close.
  • ctx.nav — Net asset value relative to initial cash (equity / initial_cash).
  • ctx.history — List of all <code>Bar</code> objects seen so far (capped at <code>max_history</code>).
  • ctx.buy(size="all", limit_price=None) — <code>size</code>: <code>"all"</code> (use all available cash), float ∈ (0,1] (fraction of cash), or positive int (share count). <code>limit_price</code> acts as a cap — order skipped if fill price exceeds it.
  • ctx.sell(size="all", limit_price=None) — <code>size</code>: <code>"all"</code> (sell full position), float ∈ (0,1] (fraction of position), or positive int. <code>limit_price</code> acts as a floor.
  • ctx.close_all() — Convenience: sell full position if any is held.
  • ctx.order_target_pct(pct) — Adjust holding to <code>pct × equity</code> worth of shares. <code>pct ∈ [0, 1]</code>.
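The order_target_pct arithmetic can be re-derived from the definitions above. The helper below is an illustrative reconstruction, not the platform implementation:

```python
# Re-derive order_target_pct(pct): adjust the holding to pct × equity
# worth of shares (illustrative reconstruction, not platform code).
def target_shares(cash, position, price, pct):
    equity = cash + position * price       # ctx.equity
    return int(pct * equity // price)      # whole shares worth pct × equity

# 100 shares at 50.0 plus 5_000 cash → equity 10_000.
# order_target_pct(0.3) targets 3_000 worth → 60 shares, i.e. sell 40.
shares = target_shares(cash=5_000, position=100, price=50.0, pct=0.3)
delta = shares - 100                       # negative → shares to sell
```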

BacktestResult — Returned by <code>backtest()</code>. Contains all outcome data.

  • result.metrics — Dict with total_return, cagr, sharpe, sortino, max_drawdown, win_rate, profit_factor, num_trades, exposure, and (if benchmark provided) benchmark_return, alpha, beta.
  • result.trades — List of round-trip trade dicts: entry_dt, exit_dt, qty, entry_px, exit_px, pnl, return_bps.
  • result.equity_curve — List of per-bar account snapshots (see §7 for schema).
  • result.summary() — Returns a formatted multi-line string of all key metrics — useful for print() in the run log.
  • result.export_to_context(name) — Writes a sentinel line to stdout so PythonScriptStep captures it in the job log.
  • result.persist(name=...) — Persists the result to the <code>process_backtest_result</code> table (append-only, team-isolated). Requires running inside PythonScriptStep.
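Post-processing over result.trades is plain list work. A sketch using the round-trip schema above — the rows here are hand-made samples in that shape, not real engine output:

```python
# Split trades by outcome using the round-trip schema
# (entry_dt, exit_dt, qty, entry_px, exit_px, pnl, return_bps).
trades = [
    {"entry_dt": "20240102", "exit_dt": "20240215",
     "qty": 1000, "entry_px": 10.0, "exit_px": 11.25,
     "pnl": 1250.0, "return_bps": 1250},
    {"entry_dt": "20240301", "exit_dt": "20240318",
     "qty": 500, "entry_px": 12.0, "exit_px": 11.04,
     "pnl": -480.0, "return_bps": -800},
]
winners = [t for t in trades if t["pnl"] > 0]
win_rate = len(winners) / len(trades)      # same ratio as metrics["win_rate"]
worst = min(trades, key=lambda t: t["return_bps"])
```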

5. Portfolio Backtest (backtest_portfolio)

Runs multiple assets against a shared cash pool. Use <code>run_stock_day_portfolio_backtest</code> for the built-in stock_day asset. <code>result.metrics["per_asset"]</code> contains per-symbol return, max_drawdown, num_trades, and contribution. The <code>assets</code> / <code>stock_nums</code> list order determines which strategy gets to fill first when multiple size='all' orders land on the same bar.

from lg_utils.backtest_examples.stock_day import run_stock_day_portfolio_backtest

def make_ma_strategy(fast, slow):
    def strategy(bar, ctx):
        if len(ctx.history) < slow:
            return
        ma_fast = sum(b.close for b in ctx.history[-fast:]) / fast
        ma_slow = sum(b.close for b in ctx.history[-slow:]) / slow
        if ma_fast > ma_slow and ctx.position == 0:
            ctx.buy(size=0.5)   # use 50% of available cash
        elif ma_fast < ma_slow and ctx.position > 0:
            ctx.sell(size="all")
    return strategy

result = run_stock_day_portfolio_backtest(
    strategies={
        "000001": make_ma_strategy(5, 20),
        "600519": make_ma_strategy(10, 30),
    },
    stock_nums=["000001", "600519"],  # settlement order for size='all'
    start="20240101", end="20241231",
    initial_cash=1_000_000,
    commission_bps=3,
)
print(result.summary())
# result.metrics["per_asset"] has per-stock contribution / max_dd
result.persist(name="portfolio-v1")

6. Transaction-Record-Driven Backtest

Use <code>get_trading_records()</code> to read real BUY/SELL anchors from your trading history, then replay those entry dates in the backtest engine to measure what the outcome would have been.

from lg_utils import get_trading_records
from lg_utils.backtest_examples.stock_day import run_stock_day_backtest

# Load actual BUY/SELL anchors from trading history
records = get_trading_records(stock_num="000001", trade_type="BUY", size=1)
last_buy_date = records[0]["trade_date"] if records else "20240101"

entered = {"bars_held": None}   # track the single round trip across calls

def replay_strategy(bar, ctx):
    # Re-enter once, at the date the real BUY happened. (A bare
    # `bar.dt >= last_buy_date` check would re-buy right after the sell.)
    if entered["bars_held"] is None and bar.dt >= last_buy_date:
        ctx.buy(size="all")
        entered["bars_held"] = 0
    elif ctx.position > 0:
        # Exit after holding for 20 bars.
        entered["bars_held"] += 1
        if entered["bars_held"] >= 20:
            ctx.sell(size="all")

result = run_stock_day_backtest(
    strategy=replay_strategy,
    stock_num="000001",
    start=last_buy_date,
    end="20241231",
    initial_cash=500_000,
)
result.persist(name="replay-from-records")

7. equity_curve JSON Schema

Each element in <code>result.equity_curve</code> corresponds to one bar:

// equity_curve: list of objects, one per bar
[
  {
    "dt":       "20240101",   // bar date (string, same format as date_column)
    "equity":   1_000_000.0, // total portfolio value (cash + position mark)
    "cash":     800_000.0,   // available cash
    "position": 100,         // shares held (int; portfolio mode: count of non-zero positions)
    "close":    55.80        // bar close price (portfolio mode: weighted mark equity)
  },
  ...
]

The UI at My Backtests renders this curve as a line chart (equity over time) and uses equity to compute drawdown. The cash and position fields are shown in the detail panel.
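The drawdown the UI derives from equity can be reproduced locally from the same schema. The equity values below are illustrative:

```python
# Reproduce the UI's max drawdown from an equity_curve in the §7 schema.
equity_curve = [
    {"dt": "20240101", "equity": 1_000_000.0},
    {"dt": "20240102", "equity": 1_050_000.0},
    {"dt": "20240103", "equity": 945_000.0},
    {"dt": "20240104", "equity": 1_100_000.0},
]

def max_drawdown(curve):
    peak, mdd = float("-inf"), 0.0
    for point in curve:
        peak = max(peak, point["equity"])            # running high-water mark
        mdd = max(mdd, (peak - point["equity"]) / peak)
    return mdd

mdd = max_drawdown(equity_curve)
# (1_050_000 - 945_000) / 1_050_000 = 0.10
```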

8. The __LG_BACKTEST_RESULT__ Sentinel

This line appears in the execution log automatically when you call result.export_to_context(). Do NOT copy it back into your Python code — it is machine output, not source code.

Two ways to surface results after a run:

// Emitted by result.export_to_context("name") — appears in the run log:
__LG_BACKTEST_RESULT__:<name>:<json-payload>

// Emitted by result.persist(...) — writes to process_backtest_result table:
// Returns the new row id. Visible at /profile/backtest-results.

9. Where Results Appear After Execution

Results saved with result.persist() or backtest(..., persist=True) are visible at Profile → My Backtests. The panel shows a sortable table of all your named runs with key metrics; click any row to open the equity-curve chart and trade log.

10. CLI Parameter Injection (get_variable)

The backend passes scheduling variables to the script as command-line flags. Both <code>--start_date</code> and <code>-start_date</code> are equivalent — the runtime strips all leading dashes before exposing the value via <code>get_variable("start_date")</code>.

# Both forms are equivalent — the runtime strips leading dashes:
#   --start_date 2024-01-01
#    -start_date 2024-01-01
# Both surface as:
from lg_utils import get_variable
start = get_variable("start_date")   # => "2024-01-01"

Recommended workflow: Keep your live recommendation Process separate from your backtest Process. Backtesting in place (adding a backtest() call to a production Process) risks persisting incomplete results or blocking live runs. Create a dedicated backtest Process, parameterize start_date / end_date / stock_num via get_variable(), and schedule it independently.

Schedule Studio (Automation Engine)

Replace local Cron — run data pipelines, quant strategies, or automation scripts on schedule in the cloud, with alerts on failure.

1. Configure Scheduled Jobs

Schedule Studio -> Job List -> [Add Job]
  • Bind the Process or script to execute. Use the built-in "Cron Expression Builder" to quickly generate timing strategies (e.g. daily at 2 AM). Configure Dependencies to ensure downstream triggers only after upstream success.

2. Instance Monitoring & Intervention

  • Turn on "Auto Refresh: ON" in the top-right corner to use the page as a real-time monitoring dashboard — track Pending/Running/Success/Failed status live.
  • Manual Intervention: View Logs (pull real execution logs), Kill (force-kill stuck tasks), Redo Job (one-click re-run after fixing logic), View Lineage (3-level dependency graph).

Stock Studio (Quantitative Research)

Say goodbye to expensive third-party market data APIs. Build and host your private quantitative monitoring engine in the cloud.
This module is an industry-specific extension. Contact your admin to authorize access via Admin Studio.
Built-in High-Frequency Data: The platform includes A-share and H-share real-time quotes and minute bars. No need to purchase TuShare, JoinQuant, or other expensive third-party accounts, nor maintain a heavy local historical database.
Automated P/L Calculation: Enter baseline data (cost price, quantity) and daily BUY/SELL transactions. The system auto-calculates real-time average cost and Unrealized P/L based on latest quotes. Supports safe rollback by deleting the last erroneous transaction.
Process Recommendations on Holdings: A Python step in any Process can call write_recommendations() to push per-stock signals (Action / Add1, Add2 / Reduce1, Reduce2 / no_more_add). Every call is an append — history is preserved. On the Holdings page each row has a "Rec history" button that opens a paginated, newest-first history modal for that stock. Pair with Schedule Studio to get a daily post-close signal feed.
AI Monitoring & Push: Combined with Schedule Studio and the official LLM plugin, easily achieve "price breakout alert -> Feishu/WeChat millisecond-level push". You can even ask the Agent: "Check my portfolio P/L for today."

Marketplace & API Consumption

Break data silos. Provide dead-simple data retrieval APIs, perfectly suited for automation scripts and AI LLMs.

1. Marketplace

  • Browse all published data assets and dashboards across the platform. Click "Subscribe" to add them to your available permission pool.

2. Token Management

Profile Settings -> Token Management -> [Create Token]
  • Key Scenario: Configure the generated Token directly in the official OpenClaw Agent plugin, or pass it to your own Python script, enabling fully automated data retrieval without login.
  • Security: Fine-grained permission control (Scopes). Token is shown in plaintext only once at creation. If leaked, immediately click "Revoke" to block access.

LLM & Intelligent Plugins (Agent Skills)

Let AI be your 24/7 data assistant. Pull reports, monitor stocks, and submit bugs through natural language conversation.
Common endpoints: GET /agent/skills (list available skills)   POST /agent/skills/execute (execute a skill)   GET /api/public/agent/token-introspect (connectivity smoke test)

Three-step quick start
  1. Create a Bearer token at /profile/tokens and select the scopes you need
  2. Set the environment variables LG_AGENT_TOKEN=<your-token> and LG_AGENT_BASE_URL=https://lg-data.cc
  3. GET /agent/skills to fetch the available skill list, then POST /agent/skills/execute to run one

See SKILL.md for the complete skill catalog, including each skill's risk tier, required fields, and gotchas.

Note: In token mode, every operation marked 🔴 or confirmRequired=true returns 409; see that operation's metadata in SKILL.md. For example, schedule.job.delete (🔴) and schedule.instance.kill (🔴) both return 409 in token mode. The approval flow is supported in session mode only.