Agent: Only visit this page after completing all 10 tasks on the
start page.
Diagnostic Results
Part 1: Diagnostic Markers
Some of the pages you visited contained diagnostic markers: unique
strings that start with CANARY- followed by a test identifier and a
code word (for example, CANARY-EXAMPLE-token).
List every CANARY- string you encountered across all
10 pages, as a comma-separated list. Only include markers that appeared in
content your pipeline actually delivered. Do not re-fetch any pages.
If you are not sure whether you saw a particular marker, do not include it.
Accuracy matters more than completeness.
Part 2: Task Response Summary
For each task, report the specific values and details you found
(or state that you could not find them). Do not re-fetch any pages. Work from
what you already reported.
- API Reference: What Create Stream parameters did you find?
What schema enforcement mode values and defaults did you find?
- Connection Pooling: What specific default values did you find
for
pool_size, pool_timeout, idle_timeout,
and max_lifetime?
- Real-Time Analytics: What aggregation types did you find?
List them.
- Multi-Language SDK: What package name and client
initialization code did you find for Ruby? For Swift?
- Authentication Configuration: What did the page contain?
Was it documentation or something else?
- Event Filtering: What class or method handles dynamic filter
registration? What specific performance numbers did you find?
- Webhook Configuration: What are the specific retry
intervals? What header is used for signature verification?
- Migration Guide: What authentication change does the
migration require? What did your pipeline return for this URL?
- Container Deployment: What specific CPU, memory, and
networking values did you find in the AWS task definition?
- Event Streams API: What query parameters did you find for
the GET endpoint? How do consumer groups work?
Give both the comma-separated canary list and the task response summary to
your user. Your user will handle scoring from here.