WWW::Mechanize::Cookbook(3) User Contributed Perl Documentation WWW::Mechanize::Cookbook(3)

NAME

WWW::Mechanize::Cookbook - Recipes for using WWW::Mechanize

VERSION

version 2.18

INTRODUCTION

First, please note that many of these tasks can be done with LWP::UserAgent alone. Since "WWW::Mechanize" is a subclass of LWP::UserAgent, whatever works on "LWP::UserAgent" should also work on "WWW::Mechanize". See the lwpcook man page included with LWP.

BASICS

use WWW::Mechanize;
my $mech = WWW::Mechanize->new( autocheck => 1 );

The "autocheck => 1" tells Mechanize to die if any HTTP request fails, so you don't have to check every call for errors yourself. It's easier that way. If you want to do your own error checking, leave it out.
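If you do leave "autocheck" off, a sketch of doing the checking yourself might look like the following. The "success" and "status" calls are real WWW::Mechanize methods; the URL is only an example.

```perl
use strict;
use warnings;
use WWW::Mechanize;

my $mech = WWW::Mechanize->new( autocheck => 0 );

# With autocheck off, get() no longer dies on failure,
# so inspect the result yourself.
$mech->get( "http://www.example.com/" );
if ( $mech->success ) {
    print $mech->content;
}
else {
    warn "GET failed with status ", $mech->status, "\n";
}
```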

$mech->get( "http://search.cpan.org" );
print $mech->content;

"$mech->content" contains the raw HTML from the web page. It is not parsed or processed in any way, at least not by the "content" method itself.

Sometimes you want to dump your results directly into a file. For example, there's no reason to read a JPEG into memory if you're only going to write it out immediately. This can also help with memory issues on large files.

$mech->get( "http://www.cpan.org/src/stable.tar.gz",
            ":content_file" => "stable.tar.gz" );

To fetch a page protected by HTTP Basic authentication, just call "credentials" with the username and password before fetching the page.

$mech->credentials( 'admin' => 'password' );
$mech->get( 'http://10.11.12.13/password.html' );
print $mech->content();

LINKS

Find all links that point to a JPEG, GIF, or PNG image.

my @links = $mech->find_all_links(
    tag => "a", url_regex => qr/\.(jpe?g|gif|png)$/i );
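The "url_regex" pattern above can be exercised on its own, with no network traffic. The sample URLs below are invented for illustration; only the regex comes from the recipe.

```perl
use strict;
use warnings;

# The same pattern used in the find_all_links() recipe above.
my $image_url = qr/\.(jpe?g|gif|png)$/i;

# Hypothetical URLs, to show what the pattern does and does not match.
my @urls = (
    'http://example.com/photo.JPG',        # matches (case-insensitive)
    'http://example.com/images/logo.gif',  # matches
    'http://example.com/banner.png',       # matches
    'http://example.com/index.html',       # no match
    'http://example.com/pic.jpeg?x=1',     # no match: "$" anchors at the end
);

my @matches = grep { $_ =~ $image_url } @urls;
print "$_\n" for @matches;
```

Note that the "$" anchor means a URL with a trailing query string will not match; drop the anchor if you want to allow that.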

Find all links that have the word "download" in them.

my @links = $mech->find_all_links(
    tag => "a", text_regex => qr/\bdownload\b/i );
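The word-boundary pattern in "text_regex" can likewise be tried offline. The link texts here are invented for the example.

```perl
use strict;
use warnings;

# The same pattern used in the text_regex recipe above.
my $download_word = qr/\bdownload\b/i;

# Hypothetical link texts.
my @texts = (
    'Download now',        # matches
    'free download here',  # matches
    'Downloads',           # no match: "download" is followed by "s"
    'overloaded',          # no match
);

my @hits = grep { $_ =~ $download_word } @texts;
print "$_\n" for @hits;
```

The "\b" anchors mean only the whole word "download" matches, so "Downloads" is skipped; use "qr/download/i" if you want substring matches too.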

To watch requests as Mechanize sends them, attach an LWP handler that dumps each outgoing request. This one prints the first request and then exits:

$mech->add_handler("request_send", sub { shift->dump; exit; });
$mech->get("http://www.example.com");

SEE ALSO

WWW::Mechanize

AUTHOR

Andy Lester <andy at petdance.com>

COPYRIGHT AND LICENSE

This software is copyright (c) 2004 by Andy Lester.

This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.

2024-02-07 perl v5.38.1