urlwatch-cookbook - Advanced topics and recipes for urlwatch

Quickly adding new URLs to the job list from the command line:

urlwatch --add url=http://example.org,name=Example

You can also specify an external diff-style tool: a tool that takes two filenames (old, new) as parameters and writes the difference between the files to its standard output. For example, use wdiff(1) to get word-based instead of line-based differences, or pandiff https://github.com/davidar/pandiff to get Markdown differences:

url: https://example.com/
diff_tool: wdiff

Note that diff_tool specifies an external command-line tool, so that tool must be installed separately (e.g. apt install wdiff on Debian or brew install wdiff on macOS). Syntax highlighting is supported for wdiff-style output, but potentially not for other diff tools.

If you would like to ignore whitespace changes so that you don't receive notifications for trivial differences, you can use diff_tool for this. For example:

diff_tool: "diff --ignore-all-space --unified"

When using another external diff-like tool, make sure it returns unified output format to retain syntax highlighting.

The diff_filter feature can be used to filter the diff output text with the same tools (see Filters) used for filtering web pages.

In order to show only the diff lines with additions, use:

url: http://example.com/things-get-added.html
diff_filter:
  - grep: '^[@+]'

This will only keep diff lines starting with @ or +. Similarly, to only keep removed lines:

url: http://example.com/things-get-removed.html
diff_filter:
  - grep: '^[@-]'

More sophisticated diff filtering is possible by combining existing filters, writing a new filter, or using shellpipe to delegate the filtering/processing of the diff output to an external tool.

Read the next section if you want to disable empty notifications.

As an extension to the previous example, let's say you want to only get notified with all lines added, but receive no notifications at all if lines are removed.

A diff usually looks like this:

--- @       Fri, 04 Mar 2022 19:58:14 +0100
+++ @       Fri, 04 Mar 2022 19:58:22 +0100
@@ -1,3 +1,3 @@

We want to keep only the lines starting with "+", but because of the headers we also need to exclude lines that start with "+++". This can be accomplished like so:

url: http://example.com/only-added.html
diff_filter:
  - grep: '^[+]'      # Include all lines starting with "+"
  - grepi: '^[+]{3}'  # Exclude the line starting with "+++"

This deals with all diff lines now. However, since urlwatch reports "changed" pages even when the diff_filter returns an empty string (which might be useful in some cases), you have to explicitly opt out by running urlwatch --edit-config and setting the empty-diff option to false in the display category:

  display:
    empty-diff: false

In some situations, it might be useful to run a script with the diff as input when changes were detected (e.g. to start an update or process something). This can be done by combining diff_filter with the shellpipe filter, which can be any custom script.

The output of the custom script then becomes the diff result reported by urlwatch: if the script produces any output, the CHANGED notification will contain that output instead of the original diff. Such a job can even have a "normal" filter attached to only watch links (the css: a part of the filter definitions):

url: http://example.org/downloadlist.html
filter:
  - css: a
diff_filter:
  - shellpipe: /usr/local/bin/process_new_links.sh
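Such a processing script can be written in any language; below is a hedged sketch in Python (the exact processing logic is hypothetical and not part of urlwatch — the only contract is that the diff arrives on the script's standard input and whatever the script prints becomes the reported output):

```python
# Hypothetical sketch of a diff-processing script for shellpipe.
# urlwatch pipes the (filtered) diff into the script's stdin, and
# whatever the script prints becomes the reported CHANGED output.

def process(diff):
    """Keep only added lines, skipping the '+++' header line."""
    return "\n".join(
        line[1:]                      # strip the leading "+" marker
        for line in diff.splitlines()
        if line.startswith("+") and not line.startswith("+++")
    )

# In the real script you would call: print(process(sys.stdin.read()))
diff = "+++ @ header\n+new download link\n-old link"
print(process(diff))  # prints: new download link
```

Remember that the script must be executable and, as with diff_tool, is run as an external command.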

To compare the visual contents of web pages, Nicolai has written pyvisualcompare https://github.com/nspo/pyvisualcompare as a frontend (with GUI) to urlwatch. The tool can be used to select a region of a web page. It then generates a configuration for urlwatch to run pyvisualcompare and generate a hash for the screen contents.

In some cases, it might be useful to ignore (temporary) network errors to avoid notifications being sent. While there is a display.error config option (defaulting to true) to control reporting of errors globally, to ignore network errors for specific jobs only, you can use the ignore_connection_errors key in the job list configuration file:

url: https://example.com/
ignore_connection_errors: true

Similarly, you might want to ignore some (temporary) HTTP errors on the server side:

url: https://example.com/
ignore_http_error_codes: 408, 429, 500, 502, 503, 504

or ignore all HTTP errors if you like:

url: https://example.com/
ignore_http_error_codes: 4xx, 5xx

For web pages with misconfigured HTTP headers or rare encodings, it may be useful to explicitly specify an encoding from Python’s Standard Encodings https://docs.python.org/3/library/codecs.html#standard-encodings.

url: https://example.com/
encoding: utf-8

By default, url jobs time out after 60 seconds. If you want a different timeout period, use the timeout key to specify it in seconds, or set it to 0 to never time out.

url: https://example.com/
timeout: 300

It is possible to add cookies to HTTP requests for pages that need them. The YAML syntax for this is:

url: http://example.com/
cookies:
    Key: ValueForKey
    OtherKey: OtherValue

If a webpage frequently changes between several known stable states, it may be desirable to have changes reported only if the webpage changes into a new unknown state. You can use compared_versions to do this.

url: https://example.com/
compared_versions: 3

In this example, changes are only reported if the webpage becomes different from the latest three distinct states. The differences are shown relative to the closest match.
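Conceptually, this behavior can be sketched as follows (a simplified illustration using Python's difflib, NOT urlwatch's actual code; the function name is made up):

```python
import difflib

def find_diff(new, known_states):
    """Report a diff only if `new` matches none of the known states;
    diff against the closest matching previous state."""
    if new in known_states:
        return None  # known stable state: nothing to report
    closest = max(
        known_states,
        key=lambda old: difflib.SequenceMatcher(None, old, new).ratio(),
    )
    return "\n".join(
        difflib.unified_diff(closest.splitlines(), new.splitlines(), lineterm="")
    )

print(find_diff("state A", ["state A", "state B", "state C"]))  # prints: None
```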

If you are watching pages that change only seldom, but you still want to be notified daily that urlwatch is working, you can watch the output of the date command, for example:

name: "urlwatch watchdog"
command: "date"

Since the output of date changes every second, this job should produce a report every time urlwatch is run.

If you want to use Redis as a cache backend over the default SQLite3 file:

urlwatch --cache=redis://localhost:6379/

There is no migration path from the SQLite3 format; the cache will be empty the first time Redis is used.

Since pages on the Tor Network https://www.torproject.org are not accessible via public DNS and TCP, you need to either configure a Tor client as HTTP/HTTPS proxy or use the torify(1) tool from the tor package (apt install tor on Debian, brew install tor on macOS). Setting up Tor is out of scope for this document. On a properly set up Tor installation, one can just prefix the urlwatch command with the torify wrapper to access .onion pages:

torify urlwatch
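If you choose the proxy route instead, jobs support per-job http_proxy and https_proxy keys; a sketch assuming a local Tor client listening on its default SOCKS port 9050 (the .onion URL is a placeholder, and SOCKS proxy support in the underlying requests library may need to be installed separately, e.g. pip install requests[socks]):

```yaml
url: http://exampleonionaddress.onion/
http_proxy: socks5h://localhost:9050
https_proxy: socks5h://localhost:9050
```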

If you want to be notified of new events on a public Facebook page, you can use the following job pattern; replace PAGE with the name of the page (which can be found by navigating to the events page in your browser):

url: http://m.facebook.com/PAGE/pages/permalink/?view_type=tab_events
filter:
  - css:
      selector: div#objects_container
      exclude: 'div.x, #m_more_friends_who_like_this, img'
  - re.sub:
      pattern: '(/events/\d*)[^"]*'
      repl: '\1'
  - html2text: pyhtml2text

When using the lynx method in the html2text filter, lynx's default width will cause additional line breaks to be inserted.

To set the lynx output width to 400 characters, use this filter setup:

url: http://example.com/longlines.html
filter:
  - html2text:
      method: lynx
      width: 400

For browser jobs, you can configure how long the headless browser will wait before a page is considered loaded by using the wait_until option.

It can take one of four values (see wait_until docs https://playwright.dev/python/docs/api/class-page#page-goto-option-wait-until of Playwright):

  • load - consider operation to be finished when the load event is fired
  • domcontentloaded - consider operation to be finished when the DOMContentLoaded event is fired
  • networkidle - (discouraged) consider operation to be finished when there are no network connections for at least 500 ms; don't use this method for testing, rely on web assertions to assess readiness instead
  • commit - consider operation to be finished when network response is received and the document started loading
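For example, a browser job (which uses navigate instead of url) that considers the page loaded once the DOMContentLoaded event fires could look like this:

```yaml
navigate: https://example.com/
wait_until: domcontentloaded
```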

In some cases (e.g. when the diff_tool or diff_filter executes some external command as a side effect that should also run for the initial page state), you can set the treat_new_as_changed key to true, which will make the job report as CHANGED instead of NEW the first time it is retrieved (and the diff will be reported, too).

url: http://example.com/initialpage.html
treat_new_as_changed: true

This option also changes the behavior of --test-diff-filter, allowing you to test the diff filter when only a single version of the page has been retrieved.

Because urlwatch uses the url/navigate (for URL/Browser jobs) and/or the command (for Shell jobs) key as unique identifier, each URL can only appear in a single job. If you want to monitor the same URL multiple times, you can append #1, #2, ... (or anything that makes them unique) to the URLs, like this:

name: "Looking for Thing A"
url: http://example.com/#1
filter:
  - grep: "Thing A"
---
name: "Looking for Thing B"
url: http://example.com/#2
filter:
  - grep: "Thing B"

Job history is stored based on the value of the url parameter, so updating a job's URL in the configuration file urls.yaml will create a new job with no history. Retain history by using --change-location:

urlwatch --change-location http://example.org#old http://example.org#new

The command also works with Browser and Shell jobs, changing navigate and command respectively.

To run one or more specific jobs instead of all known jobs, provide the job index numbers to the urlwatch command. For example, to run jobs with index 2, 4, and 7:

urlwatch 2 4 7

To simulate submitting an HTML form using the POST method, you can pass the form fields in the data field of the job description:

name: "My POST Job"
url: http://example.com/foo
data:
  username: "foo"
  password: "bar"
  submit: "Send query"

By default, the request will use the HTTP POST method, and the Content-type will be set to application/x-www-form-urlencoded.
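For illustration, here is what the encoded request body for the form fields above looks like (a quick sketch using Python's standard library, independent of urlwatch itself):

```python
from urllib.parse import urlencode

# Encode the same form fields as the job above, the way an
# application/x-www-form-urlencoded POST body is sent on the wire:
body = urlencode({"username": "foo", "password": "bar", "submit": "Send query"})
print(body)  # prints: username=foo&password=bar&submit=Send+query
```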

It is possible to customize the HTTP method and Content-type header, allowing you to send arbitrary requests to the server:

name: "My PUT Request"
url: http://example.com/item/new
method: PUT
headers:
  Content-type: application/json
data: '{"foo": true}'

urlwatch(1), urlwatch-intro(7), urlwatch-jobs(5), urlwatch-filters(5), urlwatch-config(5), urlwatch-reporters(5)

On Windows, the default file encoding might be locale-specific and not work correctly if files are saved using the (recommended) UTF-8 encoding.

If you are having problems loading UTF-8-encoded files on Windows, you might see an issue like the following when urlwatch parses your config files:

UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 214: character maps to <undefined>

To work around this issue, Python 3.7 and newer have a UTF-8 Mode https://peps.python.org/pep-0540/ that can be enabled by setting the environment variable PYTHONUTF8 to 1:

set PYTHONUTF8=1
You can also add this environment variable to your user environment or system environment to apply the UTF-8 Mode to all Python programs on your machine.

2023 Thomas Perl

May 3, 2023