Changes from all commits (16 commits)
4 changes: 2 additions & 2 deletions api-reference/endpoint/smartcrawler/start.mdx
@@ -229,7 +229,7 @@ sha256={your_webhook_secret}

To verify that a webhook request is authentic:

-1. Retrieve your webhook secret from the [dashboard](https://dashboard.scrapegraphai.com)
+1. Retrieve your webhook secret from the [dashboard](https://scrapegraphai.com/dashboard)
2. Compare the `X-Webhook-Signature` header value with `sha256={your_secret}`

<CodeGroup>
@@ -305,5 +305,5 @@ The webhook POST request contains the following JSON payload:
| result | string | The crawl result data (null if failed) |

<Note>
-Make sure to configure your webhook secret in the [dashboard](https://dashboard.scrapegraphai.com) before using webhooks. Each user has a unique webhook secret for secure verification.
+Make sure to configure your webhook secret in the [dashboard](https://scrapegraphai.com/dashboard) before using webhooks. Each user has a unique webhook secret for secure verification.
</Note>
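The two verification steps in the hunk above can be sketched in code — a minimal example, assuming (as the docs state) that the `X-Webhook-Signature` header carries the literal value `sha256={your_webhook_secret}`; the function name is illustrative, not part of any SDK:

```javascript
import { timingSafeEqual } from 'node:crypto';

// Sketch: check an incoming webhook's X-Webhook-Signature header against
// the secret retrieved from the dashboard. A constant-time comparison
// avoids leaking the secret through timing differences.
function isAuthenticWebhook(signatureHeader, webhookSecret) {
  const expected = Buffer.from(`sha256=${webhookSecret}`);
  const received = Buffer.from(signatureHeader ?? '');
  // timingSafeEqual throws on unequal lengths, so guard first.
  return received.length === expected.length && timingSafeEqual(received, expected);
}
```

In an HTTP handler this check would gate processing of the payload, e.g. responding with 401 when it fails.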
18 changes: 9 additions & 9 deletions api-reference/errors.mdx
@@ -139,17 +139,17 @@ except APIError as e:
```

```javascript JavaScript
-import { smartScraper } from 'scrapegraph-js';
+import { scrapegraphai } from 'scrapegraph-js';

-const apiKey = 'your-api-key';
+const sgai = scrapegraphai({ apiKey: 'your-api-key' });

-const response = await smartScraper(apiKey, {
-  website_url: 'https://example.com',
-  user_prompt: 'Extract data',
-});
-
-if (response.status === 'error') {
-  console.error('Error:', response.error);
+try {
+  const { data } = await sgai.extract('https://example.com', {
+    prompt: 'Extract data',
+  });
+  console.log('Data:', data);
+} catch (error) {
+  console.error('Error:', error.message);
}
```
</CodeGroup>
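Since the updated JavaScript example surfaces failures as thrown errors, transient failures can be handled with a small retry wrapper — a sketch under the assumption that any rejected promise is worth retrying; `withRetries` is an illustrative helper, not part of the SDK:

```javascript
// Sketch: retry an async operation with exponential backoff. Works with
// any function that rejects on failure, such as the try/catch pattern above.
async function withRetries(fn, { attempts = 3, baseDelayMs = 500 } = {}) {
  let lastError;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      // Back off before the next attempt: 500 ms, 1000 ms, 2000 ms, ...
      if (attempt < attempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}
```

The wrapper re-throws the last error once the attempt budget is exhausted, so callers keep a single catch site.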
2 changes: 1 addition & 1 deletion api-reference/introduction.mdx
@@ -9,7 +9,7 @@ The ScrapeGraphAI API provides powerful endpoints for AI-powered web scraping an

## Authentication

-All API requests require authentication using an API key. You can get your API key from the [dashboard](https://dashboard.scrapegraphai.com).
+All API requests require authentication using an API key. You can get your API key from the [dashboard](https://scrapegraphai.com/dashboard).

```bash
SGAI-APIKEY: your-api-key-here
22 changes: 10 additions & 12 deletions cookbook/examples/pagination.mdx
@@ -349,22 +349,20 @@ if __name__ == "__main__":
## JavaScript SDK Example

```javascript
-import { smartScraper } from 'scrapegraph-js';
+import { scrapegraphai } from 'scrapegraph-js';
import 'dotenv/config';

-const apiKey = process.env.SGAI_APIKEY;
+const sgai = scrapegraphai({ apiKey: process.env.SGAI_APIKEY });

-const response = await smartScraper(apiKey, {
-  website_url: 'https://www.amazon.in/s?k=tv&crid=1TEF1ZFVLU8R8&sprefix=t%2Caps%2C390&ref=nb_sb_noss_2',
-  user_prompt: 'Extract all product info including name, price, rating, and image_url',
-  total_pages: 3,
-});
+const { data } = await sgai.extract(
+  'https://www.amazon.in/s?k=tv&crid=1TEF1ZFVLU8R8&sprefix=t%2Caps%2C390&ref=nb_sb_noss_2',
+  {
+    prompt: 'Extract all product info including name, price, rating, and image_url',
+    totalPages: 3,
+  }
+);

-if (response.status === 'error') {
-  console.error('Error:', response.error);
-} else {
-  console.log('Response:', JSON.stringify(response.data, null, 2));
-}
+console.log('Response:', JSON.stringify(data, null, 2));
```
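With `totalPages` set, listings scraped from successive pages can overlap, so merged results are worth deduplicating. A sketch, assuming each page yields an array of objects carrying the `name`, `price`, `rating`, and `image_url` fields requested in the prompt (the real response shape may differ):

```javascript
// Sketch: merge product arrays scraped from successive pages, dropping
// duplicates keyed by product name. Field names follow the extraction
// prompt above; the actual response shape is an assumption.
function mergeProducts(...pages) {
  const seen = new Map();
  for (const products of pages) {
    for (const product of products) {
      if (!seen.has(product.name)) seen.set(product.name, product);
    }
  }
  return [...seen.values()];
}
```

Keying on a composite such as `name` plus `image_url` would be safer when listings reuse titles.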

## Example Output
2 changes: 1 addition & 1 deletion cookbook/introduction.mdx
@@ -87,7 +87,7 @@ Each example is available in multiple implementations:
4. Experiment and adapt the code for your needs

<Note>
-Make sure to have your ScrapeGraphAI API key ready. Get one from the [dashboard](https://dashboard.scrapegraphai.com) if you haven't already.
+Make sure to have your ScrapeGraphAI API key ready. Get one from the [dashboard](https://scrapegraphai.com/dashboard) if you haven't already.
</Note>

## Additional Resources
17 changes: 1 addition & 16 deletions dashboard/overview.mdx
@@ -19,21 +19,6 @@ The ScrapeGraphAI dashboard is your central hub for managing all your web scrapi
- **Last Used**: Timestamp of your most recent API request
- **Quick Actions**: Buttons to start new scraping jobs or access common features

-## Usage Analytics
-
-Track your API usage patterns with our detailed analytics view:
-
-<Frame>
-  <img src="/images/dashboard/dashboard-2.png" alt="Usage Analytics Graph" />
-</Frame>
-
-The usage graph provides:
-- **Service-specific metrics**: Track usage for SmartScraper, SearchScraper, and Markdownify separately
-- **Time-based analysis**: View usage patterns over different time periods
-- **Interactive tooltips**: Hover over data points to see detailed information
-- **Trend analysis**: Identify usage patterns and optimize your API consumption
-
-
## Key Features

- **Usage Statistics**: Monitor your API usage and remaining credits
@@ -43,7 +28,7 @@

## Getting Started

-1. Log in to your [dashboard](https://dashboard.scrapegraphai.com)
+1. Log in to your [dashboard](https://scrapegraphai.com/dashboard)
2. View your API key in the settings section
3. Check your available credits
4. Start your first scraping job
35 changes: 16 additions & 19 deletions developer-guides/llm-sdks-and-frameworks/anthropic.mdx
@@ -27,24 +27,23 @@ If using Node < 20, install `dotenv` and add `import 'dotenv/config'` to your co
This example demonstrates a simple workflow: scrape a website and summarize the content using Claude.

```typescript
-import { smartScraper } from 'scrapegraph-js';
+import { scrapegraphai } from 'scrapegraph-js';
import Anthropic from '@anthropic-ai/sdk';

-const apiKey = process.env.SGAI_APIKEY;
+const sgai = scrapegraphai({ apiKey: process.env.SGAI_APIKEY });
const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

-const scrapeResult = await smartScraper(apiKey, {
-  website_url: 'https://scrapegraphai.com',
-  user_prompt: 'Extract all content from this page',
+const { data } = await sgai.extract('https://scrapegraphai.com', {
+  prompt: 'Extract all content from this page',
});

-console.log('Scraped content length:', JSON.stringify(scrapeResult.data.result).length);
+console.log('Scraped content length:', JSON.stringify(data).length);

const message = await anthropic.messages.create({
model: 'claude-haiku-4-5',
max_tokens: 1024,
messages: [
-    { role: 'user', content: `Summarize in 100 words: ${JSON.stringify(scrapeResult.data.result)}` }
+    { role: 'user', content: `Summarize in 100 words: ${JSON.stringify(data)}` }
]
});

@@ -56,12 +55,12 @@ console.log('Response:', message);
This example shows how to use Claude's tool use feature to let the model decide when to scrape websites based on user requests.

```typescript
-import { smartScraper } from 'scrapegraph-js';
+import { scrapegraphai } from 'scrapegraph-js';
import { Anthropic } from '@anthropic-ai/sdk';
import { z } from 'zod';
import { zodToJsonSchema } from 'zod-to-json-schema';

-const apiKey = process.env.SGAI_APIKEY;
+const sgai = scrapegraphai({ apiKey: process.env.SGAI_APIKEY });
const anthropic = new Anthropic({
apiKey: process.env.ANTHROPIC_API_KEY
});
@@ -91,12 +90,11 @@ if (toolUse && toolUse.type === 'tool_use') {
const input = toolUse.input as { url: string };
console.log(`Calling tool: ${toolUse.name} | URL: ${input.url}`);

-  const result = await smartScraper(apiKey, {
-    website_url: input.url,
-    user_prompt: 'Extract all content from this page',
+  const { data } = await sgai.extract(input.url, {
+    prompt: 'Extract all content from this page',
});

-  console.log(`Scraped content preview: ${JSON.stringify(result.data.result)?.substring(0, 300)}...`);
+  console.log(`Scraped content preview: ${JSON.stringify(data)?.substring(0, 300)}...`);
// Continue with the conversation or process the scraped content as needed
}
```
@@ -106,11 +104,11 @@
This example demonstrates how to use Claude to extract structured data from scraped website content.

```typescript
-import { smartScraper } from 'scrapegraph-js';
+import { scrapegraphai } from 'scrapegraph-js';
import Anthropic from '@anthropic-ai/sdk';
import { z } from 'zod';

-const apiKey = process.env.SGAI_APIKEY;
+const sgai = scrapegraphai({ apiKey: process.env.SGAI_APIKEY });
const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

const CompanyInfoSchema = z.object({
@@ -119,9 +117,8 @@ const CompanyInfoSchema = z.object({
description: z.string().optional()
});

-const scrapeResult = await smartScraper(apiKey, {
-  website_url: 'https://stripe.com',
-  user_prompt: 'Extract all content from this page',
+const { data } = await sgai.extract('https://stripe.com', {
+  prompt: 'Extract all content from this page',
});

const prompt = `Extract company information from this website content.
Expand All @@ -135,7 +132,7 @@ Output ONLY valid JSON in this exact format (no markdown, no explanation):
}

Website content:
-${JSON.stringify(scrapeResult.data.result)}`;
+${JSON.stringify(data)}`;

const message = await anthropic.messages.create({
model: 'claude-haiku-4-5',
39 changes: 18 additions & 21 deletions developer-guides/llm-sdks-and-frameworks/gemini.mdx
@@ -27,22 +27,21 @@ If using Node < 20, install `dotenv` and add `import 'dotenv/config'` to your co
This example demonstrates a simple workflow: scrape a website and summarize the content using Gemini.

```typescript
-import { smartScraper } from 'scrapegraph-js';
+import { scrapegraphai } from 'scrapegraph-js';
import { GoogleGenAI } from '@google/genai';

-const apiKey = process.env.SGAI_APIKEY;
+const sgai = scrapegraphai({ apiKey: process.env.SGAI_APIKEY });
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

-const scrapeResult = await smartScraper(apiKey, {
-  website_url: 'https://scrapegraphai.com',
-  user_prompt: 'Extract all content from this page',
+const { data } = await sgai.extract('https://scrapegraphai.com', {
+  prompt: 'Extract all content from this page',
});

-console.log('Scraped content length:', JSON.stringify(scrapeResult.data.result).length);
+console.log('Scraped content length:', JSON.stringify(data).length);

const response = await ai.models.generateContent({
model: 'gemini-2.5-flash',
-  contents: `Summarize: ${JSON.stringify(scrapeResult.data.result)}`,
+  contents: `Summarize: ${JSON.stringify(data)}`,
});

console.log('Summary:', response.text);
@@ -53,26 +52,25 @@ console.log('Summary:', response.text);
This example shows how to analyze website content using Gemini's multi-turn conversation capabilities.

```typescript
-import { smartScraper } from 'scrapegraph-js';
+import { scrapegraphai } from 'scrapegraph-js';
import { GoogleGenAI } from '@google/genai';

-const apiKey = process.env.SGAI_APIKEY;
+const sgai = scrapegraphai({ apiKey: process.env.SGAI_APIKEY });
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

-const scrapeResult = await smartScraper(apiKey, {
-  website_url: 'https://news.ycombinator.com/',
-  user_prompt: 'Extract all content from this page',
+const { data } = await sgai.extract('https://news.ycombinator.com/', {
+  prompt: 'Extract all content from this page',
});

-console.log('Scraped content length:', JSON.stringify(scrapeResult.data.result).length);
+console.log('Scraped content length:', JSON.stringify(data).length);

const chat = ai.chats.create({
model: 'gemini-2.5-flash'
});

// Ask for the top 3 stories on Hacker News
const result1 = await chat.sendMessage({
-  message: `Based on this website content from Hacker News, what are the top 3 stories right now?\n\n${JSON.stringify(scrapeResult.data.result)}`
+  message: `Based on this website content from Hacker News, what are the top 3 stories right now?\n\n${JSON.stringify(data)}`
});
console.log('Top 3 Stories:', result1.text);

@@ -88,22 +86,21 @@ console.log('4th and 5th Stories:', result2.text);
This example demonstrates how to extract structured data using Gemini's JSON mode from scraped website content.

```typescript
-import { smartScraper } from 'scrapegraph-js';
+import { scrapegraphai } from 'scrapegraph-js';
import { GoogleGenAI, Type } from '@google/genai';

-const apiKey = process.env.SGAI_APIKEY;
+const sgai = scrapegraphai({ apiKey: process.env.SGAI_APIKEY });
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

-const scrapeResult = await smartScraper(apiKey, {
-  website_url: 'https://stripe.com',
-  user_prompt: 'Extract all content from this page',
+const { data } = await sgai.extract('https://stripe.com', {
+  prompt: 'Extract all content from this page',
});

-console.log('Scraped content length:', JSON.stringify(scrapeResult.data.result).length);
+console.log('Scraped content length:', JSON.stringify(data).length);

const response = await ai.models.generateContent({
model: 'gemini-2.5-flash',
-  contents: `Extract company information: ${JSON.stringify(scrapeResult.data.result)}`,
+  contents: `Extract company information: ${JSON.stringify(data)}`,
config: {
responseMimeType: 'application/json',
responseSchema: {