This is the official .NET SDK for the ScrAPI web scraping service.
- Website
- Documentation
- Installation
- Quick Start
- Dependency Injection
- Scrape Request Options
- Scrape Response Data
- Scrape Request Defaults
- Lookups
- Exceptions
ScrAPI is available on NuGet and can be installed by copying and pasting the following command into the Package Manager Console within Visual Studio (Tools > NuGet Package Manager > Package Manager Console).
Install-Package ScrAPI
Alternatively, if you're using .NET Core, you can install ScrAPI via the command-line interface with the following command:
dotnet add package ScrAPI
You can start scraping websites with as little as three lines of code:
var client = new ScrapiClient("YOUR_API_KEY"); // "" for limited free mode.
var request = new ScrapeRequest("https://deventerprise.com");
var response = await client.ScrapeAsync(request);
// The result will contain the content and other information about the operation.
Console.WriteLine(response?.Content);
The API client implements the IScrapiClient interface, which can be used with dependency injection and assists with mocking for unit tests.
// Add singleton to IServiceCollection
services.AddSingleton<IScrapiClient>(_ => new ScrapiClient("YOUR_API_KEY"));
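Once registered, the client can be consumed anywhere in your application via constructor injection. Below is a minimal sketch; the MyScraperService class is a hypothetical consumer for illustration, not part of the SDK:
// Hypothetical consumer class; IScrapiClient is supplied by the container.
public class MyScraperService
{
    private readonly IScrapiClient _client;

    public MyScraperService(IScrapiClient client) => _client = client;

    public async Task<string?> ScrapeContentAsync(string url)
    {
        var response = await _client.ScrapeAsync(new ScrapeRequest(url));
        return response?.Content;
    }
}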
The API provides a number of options to assist with scraping a target website.
var request = new ScrapeRequest("https://deventerprise.com")
{
Cookies = new Dictionary<string, string>
{
{ "cookie1", "value1" },
{ "cookie2", "value2" },
},
Headers = new Dictionary<string, string>
{
{ "header1", "value1" },
{ "header2", "value2" },
},
ProxyCountry = "USA",
ProxyType = ProxyType.Residential,
UseBrowser = true,
SolveCaptchas = true,
RequestMethod = "GET",
ResponseFormat = ResponseFormat.Html,
CustomProxyUrl = "https://user:[email protected]:8080",
SessionId = Guid.NewGuid().ToString(),
CallbackUrl = new Uri("https://webhook.site/"),
};
For more detailed information on these options, please refer to the documentation.
When the UseBrowser request option is used, you can supply any number of browser commands to control the browser before the resulting page state is captured.
var request = new ScrapeRequest("https://www.roboform.com/filling-test-all-fields")
{
UseBrowser = true,
AcceptDialogs = true
};
// Example of chaining commands to control the website.
request.BrowserCommands
.Input("input[name='01___title']", "Mr")
.Input("input[name='02frstname']", "Werner")
.Input("input[name='04lastname']", "van Deventer")
.Select("select[name='40cc__type']", "Discover")
.Wait(TimeSpan.FromSeconds(3))
.WaitFor("input[type='reset']")
.Click("input[type='reset']")
.Wait(TimeSpan.FromSeconds(1))
.Evaluate("console.log('any valid code...')");
The response data contains all the result information about your request, including the HTML content, headers, and any cookies.
var response = await client.ScrapeAsync(request);
Console.WriteLine(response.RequestUrl); // The requested URL.
Console.WriteLine(response.ResponseUrl); // The final URL of the page.
Console.WriteLine(response.Duration); // The amount of time the operation took.
Console.WriteLine(response.Attempts); // The number of attempts to scrape the page.
Console.WriteLine(response.CreditsUsed); // The number of credits used for this request.
Console.WriteLine(response.StatusCode); // The response status code from the request.
Console.WriteLine(response.Content); // The final page content.
Console.WriteLine(response.ContentHash); // SHA1 hash of the content.
Console.WriteLine(response.Html); // Html Agility Pack parsed HTML content.
foreach (var captchaSolved in response.CaptchasSolved)
{
Console.WriteLine($"{captchaSolved.Value} occurrences of {captchaSolved.Key} solved");
}
foreach (var header in response.Headers)
{
Console.WriteLine($"{header.Key}: {header.Value}");
}
foreach (var cookie in response.Cookies)
{
Console.WriteLine($"{cookie.Key}: {cookie.Value}");
}
foreach (var errorMessage in response.ErrorMessages ?? [])
{
Console.WriteLine(errorMessage); // Any errors that occurred during the request.
}
This SDK also provides a number of convenient extensions to assist in parsing and checking the data once retrieved.
- Extract numbers only
- Strip script tags from HTML
- Safe query selector that does not throw
- Next/adjacent element finder
- Comprehensive element visibility checks
- Style parsing
Both Html Agility Pack and Hazz are included for HTML parsing.
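As a rough sketch of how these helpers might look in use (the extension method names below are illustrative assumptions, so check the package for the actual API):
// Illustrative only: these extension method names are assumptions, not confirmed SDK API.
var digits = response.Content.ExtractNumbers();      // extract numbers only from mixed text
var cleanHtml = response.Content.StripScriptTags();  // strip script tags before processing

// Safe query selector that returns null instead of throwing on a missing element.
var node = response.Html.QuerySelectorSafe("h1.title");
var visible = node?.IsVisible() ?? false;            // comprehensive element visibility check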
The SDK provides a static class to define the defaults that will be applied to every ScrapeRequest object. This can greatly reduce the amount of code required to create new requests if all or most of your requests use the same values.
// Set defaults that will apply to all new `ScrapeRequest` objects (unless overridden).
ScrapeRequestDefaults.ProxyType = ProxyType.Residential;
ScrapeRequestDefaults.UseBrowser = true;
ScrapeRequestDefaults.SolveCaptchas = true;
ScrapeRequestDefaults.Headers.Add("Sample", "Custom-Value");
// Any new request will have the corresponding values automatically applied.
var request = new ScrapeRequest("https://deventerprise.com") { ProxyType = ProxyType.Tor };
Debug.Assert(request.ProxyType == ProxyType.Tor); // Overridden
Debug.Assert(request.UseBrowser);
Debug.Assert(request.SolveCaptchas);
Debug.Assert(request.Headers.ContainsKey("Sample"));
The SDK provides wrappers for basic lookups, such as the credit balance of an API key and a list of supported country codes to use with the ProxyCountry request option.
Easily check the remaining credit balance for your API key.
var balance = await client.GetCreditBalanceAsync();
var supportedCountries = await client.GetSupportedCountriesAsync();
// Use the Key value in the ProxyCountry request property.
foreach (var country in supportedCountries)
{
Console.WriteLine($"{country.Key}: {country.Name}");
}
Any errors using the API will always result in a ScrapiException.
This exception also contains a property for the HTTP status code that caused the failure, to assist with retry logic.
var client = new ScrapiClient("YOUR_API_KEY"); // "" for limited free mode.
var request = new ScrapeRequest("https://deventerprise.com");
try
{
var result = await client.ScrapeAsync(request);
Console.WriteLine(result?.Content);
}
catch (ScrapiException ex) when (ex.StatusCode == System.Net.HttpStatusCode.InternalServerError)
{
// Error messages from the server aim to be as helpful as possible.
Console.WriteLine(ex.Message);
throw;
}
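Because the failing status code is exposed on the exception, simple retry logic is straightforward to build around ScrapeAsync. A minimal sketch, where the attempt count and back-off delay are arbitrary choices rather than SDK recommendations:
var maxAttempts = 3;
for (var attempt = 1; attempt <= maxAttempts; attempt++)
{
    try
    {
        var result = await client.ScrapeAsync(request);
        Console.WriteLine(result?.Content);
        break; // Success; stop retrying.
    }
    catch (ScrapiException ex) when (ex.StatusCode >= System.Net.HttpStatusCode.InternalServerError && attempt < maxAttempts)
    {
        // Retry only on server-side (5xx) errors, with a short linear back-off.
        await Task.Delay(TimeSpan.FromSeconds(attempt));
    }
}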