llm

LLM-powered deep phonetic analysis via Anthropic, OpenAI, compatible gateways, or local agents.

Requires the llm extra: pip install "phonemenal[llm]" (quoted so the brackets are not expanded by shells like zsh)

phonemenal.llm

LLM-powered deep phonetic analysis.

Three modes:

1. API: direct call to Anthropic or OpenAI
2. Agent: pipe the prompt to a local CLI agent (claude, codex, or a custom executable)
3. Gateway: OpenAI-compatible API endpoint (self-hosted models)

The LLM module is complementary to algorithmic scoring — it can serve as a standalone analysis tool or as a tiebreaker when algorithmic scores are ambiguous (mid-range).

get_prompt(word: str) -> str

Get the analysis prompt template filled with the word.

Source code in phonemenal/llm.py
def get_prompt(word: str) -> str:
    """Get the analysis prompt template filled with the word."""
    template = PROMPT_PATH.read_text()
    return template.replace("{word}", word)
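A detail worth noting: the placeholder is filled with str.replace rather than str.format, so any other braces in the prompt file pass through untouched. A minimal, self-contained sketch of that substitution step (the template string here is illustrative, not the actual prompt shipped with phonemenal):

```python
def fill_prompt(template: str, word: str) -> str:
    """Substitute the {word} placeholder, leaving any other braces intact."""
    return template.replace("{word}", word)

# Hypothetical template text; a real prompt would come from PROMPT_PATH.
template = "Analyze the phonetics of {word}. Report {detail}-level features."
print(fill_prompt(template, "serendipity"))
# → Analyze the phonetics of serendipity. Report {detail}-level features.
```

Using str.format here would raise a KeyError on the stray {detail} placeholder; str.replace sidesteps that.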

analyze(word: str, *, provider: str = 'anthropic', model: Optional[str] = None, agent: Optional[str] = None, gateway_url: Optional[str] = None, timeout: int = 120) -> str

Run deep phonetic analysis via LLM.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| word | str | Word or phrase to analyze. | required |
| provider | str | "anthropic" or "openai" (for API mode). | 'anthropic' |
| model | Optional[str] | Override the default model. | None |
| agent | Optional[str] | Agent name ("claude", "codex") or path to an executable (for agent mode). | None |
| gateway_url | Optional[str] | OpenAI-compatible API endpoint URL (for gateway mode). | None |
| timeout | int | Timeout in seconds for the agent subprocess. | 120 |

Returns the LLM's analysis as a string.

Mode selection:

- If agent is set → agent mode (subprocess)
- If gateway_url is set → gateway mode (OpenAI SDK with a custom base_url)
- Otherwise → API mode (direct SDK call)

Source code in phonemenal/llm.py
def analyze(
    word: str,
    *,
    provider: str = "anthropic",
    model: Optional[str] = None,
    agent: Optional[str] = None,
    gateway_url: Optional[str] = None,
    timeout: int = 120,
) -> str:
    """Run deep phonetic analysis via LLM.

    Args:
        word: Word or phrase to analyze.
        provider: "anthropic" or "openai" (for API mode).
        model: Override the default model.
        agent: Agent name ("claude", "codex") or path to executable (for agent mode).
        gateway_url: OpenAI-compatible API endpoint URL (for gateway mode).
        timeout: Timeout in seconds for agent subprocess.

    Returns the LLM's analysis as a string.

    Mode selection:
    - If agent is set → agent mode (subprocess)
    - If gateway_url is set → gateway mode (OpenAI SDK with custom base_url)
    - Otherwise → API mode (direct SDK call)
    """
    prompt = get_prompt(word)

    if agent:
        return _analyze_via_agent(prompt, agent=agent, timeout=timeout)
    elif gateway_url:
        return _analyze_via_gateway(prompt, gateway_url=gateway_url, model=model)
    else:
        return _analyze_via_api(prompt, provider=provider, model=model)
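The dispatch order above implies a precedence: agent beats gateway_url, and a plain API call is the fallback. That logic can be sketched in isolation (select_mode is a hypothetical helper for illustration, not part of the package):

```python
def select_mode(agent=None, gateway_url=None):
    """Mirror analyze()'s dispatch: agent takes precedence over
    gateway_url; with neither set, fall back to a direct API call."""
    if agent:
        return "agent"
    if gateway_url:
        return "gateway"
    return "api"

print(select_mode(agent="claude"))                           # → agent
print(select_mode(gateway_url="http://localhost:8000/v1"))   # → gateway
print(select_mode())                                         # → api
```

Note the consequence: if a caller passes both agent and gateway_url, the gateway URL is silently ignored and agent mode runs.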