Built-In AI Playground
Prompt API
Explainer
Specifications (Not yet available)
Requirements
Make sure that all the requirements are green.
Activate
chrome://flags/#prompt-api-for-gemini-nano
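Once the flag is enabled (and Chrome restarted), the API is exposed as a global LanguageModel object. A minimal feature-detection sketch, assuming a browser context; the hasPromptAPI name is illustrative:

```typescript
// Detect whether the Prompt API global is exposed in this context.
// In environments without the flag (or outside Chrome) this is false.
const hasPromptAPI =
  typeof (globalThis as any).LanguageModel !== "undefined";

console.log(
  hasPromptAPI
    ? "Prompt API is exposed"
    : "Prompt API is not exposed in this context",
);
```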
Playground
Options
Top K
Temperature
Initial prompts (add initial content)
Prompt(s)
Type of content: String
Prompt
Availability
Code
const status = await LanguageModel.availability({
  topK: 3,
  temperature: 1,
});
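availability() resolves with one of the states defined in the Prompt API explainer: "unavailable", "downloadable", "downloading", or "available". A sketch of how a page might branch on the result; the nextStep helper is illustrative, and the state names come from the explainer rather than from this playground:

```typescript
// Availability states as defined in the Prompt API explainer.
type Availability = "unavailable" | "downloadable" | "downloading" | "available";

// Map each state to a suggested action (descriptions are illustrative).
const NEXT_STEP: Record<Availability, string> = {
  available: "create a session immediately",
  downloadable: "call LanguageModel.create() to trigger the model download",
  downloading: "wait and monitor downloadprogress events",
  unavailable: "fall back to a server-side model",
};

function nextStep(status: Availability): string {
  return NEXT_STEP[status];
}

console.log(nextStep("downloadable"));
// → "call LanguageModel.create() to trigger the model download"
```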
Streaming
Code
const abortController = new AbortController();

const session = await LanguageModel.create({
  topK: 3,
  temperature: 1,
  initialPrompts: [],
  monitor(m: any) {
    m.addEventListener("downloadprogress", (e: any) => {
      console.log(`Downloaded ${e.loaded * 100}%`);
    });
  },
  signal: abortController.signal,
});

const stream: ReadableStream = session.promptStreaming("", {
  signal: abortController.signal,
});

let output = "";
for await (const chunk of stream) {
  // Do something with each 'chunk'
  output += chunk;
}

// See the complete response here
console.log(output);
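For comparison, session.prompt() is the non-streaming counterpart: it resolves once with the complete response instead of yielding chunks. The promptOnce helper below is an illustrative sketch written against a minimal session-like interface, so it does not depend on the browser API being present:

```typescript
// Minimal shape of the part of a session this helper uses.
interface SessionLike {
  prompt(input: string): Promise<string>;
}

// Send one prompt and resolve with the full response text.
async function promptOnce(session: SessionLike, input: string): Promise<string> {
  return await session.prompt(input);
}

// In the playground this would be used roughly as:
//   const session = await LanguageModel.create({ topK: 3, temperature: 1 });
//   console.log(await promptOnce(session, "Write a haiku about browsers."));
```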
Parameters
Code
const params = await LanguageModel.params();
Default TopK: N/A
Max TopK: N/A
Default Temperature: N/A
Max Temperature: N/A
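Once retrieved, the parameters can be used to keep requested options within the model's limits before creating a session. A sketch, assuming the params object carries defaultTopK, maxTopK, defaultTemperature, and maxTemperature fields as reported above; clampOptions and the sample values are illustrative:

```typescript
// Shape of the values reported by LanguageModel.params() (assumed).
interface ModelParams {
  defaultTopK: number;
  maxTopK: number;
  defaultTemperature: number;
  maxTemperature: number;
}

// Clamp requested options to the model's limits, falling back to the
// defaults when a value is not supplied.
function clampOptions(params: ModelParams, topK?: number, temperature?: number) {
  return {
    topK: Math.min(topK ?? params.defaultTopK, params.maxTopK),
    temperature: Math.min(temperature ?? params.defaultTemperature, params.maxTemperature),
  };
}

// Example with hypothetical limits: topK 20 exceeds maxTopK 8 and is
// clamped; temperature 1.5 is within range and passes through.
const opts = clampOptions(
  { defaultTopK: 3, maxTopK: 8, defaultTemperature: 1, maxTemperature: 2 },
  20,
  1.5,
);
console.log(opts); // { topK: 8, temperature: 1.5 }
```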