Perl Elements to Avoid
Introduction
Often when people ask for help with Perl code, they show code that suffers from many bad or outdated elements. This is expected, as there are many bad Perl tutorials out there, and lots of bad code that people have learned from, but it is still not desirable. In order to not get "yelled at" for using these, here is a document of the bad elements that people tend to use, along with the better practices that should be used instead.
A book I read said that, as opposed to most previous idea systems, they were trying to liquidate negatives instead of instilling positives in people. So, in the spirit of liquidating negatives, this tutorial-in-reverse aims to show you what not to do.
Note: Please don't think this advice is meant as gospel. There are some instances where one can expect to deviate from it, and a lot of it can be considered only the opinion of its originators. I tried to filter the various pieces of advice I found in the sources and get rid of things that are either a matter of taste, or not so critical, or that have arguments for and against (so-called colour of the bike shed arguments), but some of the advice here may still be controversial.
Copyright © 2010 Shlomi Fish
This document is copyrighted by Shlomi Fish under the Creative Commons Attribution 3.0 Unported License.
- Introduction
- The List of Bad Elements
- No Indentation
- No "use strict;" and "use warnings;"
- Correct style for using the open function
- Calling variables "file"
- Identifiers without underscores
- Don't use prototypes for subroutines
- Ampersand in Subroutine Calls
- Assigning from $_
- Using "foreach" on lines
- String Notation
- Non-Lexical Loop Iterators
- Slurping a file (i.e: Reading it all into memory)
- Write code in Paragraphs using Empty Lines
- Use IO::Socket and friends instead of lower-level calls
- Subroutine Arguments Handling
- Avoid using chop() to trim newline characters from lines
- Don't start Modules and Packages with a Lowercase Letter
- Avoid Indirect Object Notation
- $$myarray_ref[$idx] or $$myhash_ref{$key}
- C-style for loops
- Avoid Intrusive Commenting
- Accessing Object Slots Directly
- '^' and '$' in Regular Expressions
- Magic Numbers
- String Variables Enclosed in Double Quotes
- @array[$idx] for array subscripting
- Variables called $a and $b
- Flow Control Statements Without an Explicit Label
- ($#array + 1) and Other Abuses of $#.
- $array[$#array], $array[$#array-1], etc.
- Interpolating Strings into Regular Expressions
- Overusing $_
- Mixing Tabs and Spaces
- `…` or qx// for Executing Commands
- No Explicit Returns
- "Varvarname" - Using a variable as another variable's name.
- Several synchronised arrays.
- Use Leading Underscores ('_') for Internal Methods and Functions
- print $fh @args
- Using STDIN instead of ARGV
- Modifying arrays or hashes while iterating through them.
- Comments and Identifiers in a Foreign Language
- Using perlform for formatting text.
- Using $obj->new for object construction.
- Law of Demeter
- Passing parameters in delegation
- Duplicate Code
- Long Functions and Methods
- Using map instead of foreach for side-effects
- Using the ternary operator for side-effects instead of if/else
- Nested top-level subroutines
- Using grep instead of any and friends
- Using the FileHandle Module
- "Including" files instead of using Modules
- Using Global Variables as an Interface to the Module
- Declaring all variables at the top ("predeclarations")
- Using Switch.pm
- Using threads in Perl
- Calling Shell Commands Too Much
- Missing Semicolons at the end of blocks
- List form of open with one argument.
- Trailing Whitespace
- Misusing String Eval
- Named Parameters That Start With Dash
- Code and Markup Injection
- Initializing Arrays and Hashes from Anonymous References
- Overly Long Lines in the Source Code
- Getting rid of special entries in directory contents
- Assigning a List to a Scalar Variable
- Regular Expressions starting or ending with “.*”
- Recursive Directory Traversal Without Using File::Find and Friends
- Using File::Find for listing the contents of a directory non-recursively
- Populating an Array with Multiple Copies of the Same Reference
- Conditional my declarations.
- Using One Variable for Two (or More) Different Purposes
- Using \1 instead of $1 on the Right Hand Side of a Substitution
- Appending using $array[$i++] = $value_to_append;
- Premature Optimisation
- Not Using Version Control
- Writing Automated Tests
- Using a Continuous Integration System
- How to Properly Autoflush
- Conditional "use" statements
- Parsing XML/HTML/JSON/CSV/etc. using regular expressions
- Using most of the Perl punctuation variables from perlvar
- Generating invalid Markup (of HTML/etc.)
- Capturing Instead of Clustering in Regular Expressions
- Using Regex Captures Without Checking if the Match was Successful
- Using select($file_handle)
- Non-Recommended Regular Expression-related Variables
- Not Packaging as CPAN-like Distributions
- Not Using a Bug Tracker/Issue Tracker
- Unrelated packages inside modules
- Non-explicitly-imported symbols
- Excessive calls to subroutines in other packages
- Using the Scalar Multidimensional Hashes Emulation
- Using the "-w" flag.
- “use Module qw($VERSION @IMPORTS)”
- Mutating Variables by referring to them again (e.g: “$x = $x + 1”)
- Using the "**" operator for integer power/exponentiation
- TAP test suites that fail when run in parallel
- Using character classes (e.g "\d") with Unicode text
- Using empty regular expressions patterns
- Filenames with case collisions
- "plan skip_all" with "use Test::More tests => [count];"
- split()'s third "limit" argument
- Standalone XML tags in HTML
- Sources of This Advice
- Further Reading
The List of Bad Elements
No Indentation
Indentation means that the contents of every block are offset from their containing environment by some leading whitespace. This makes the code easier to read and follow.
Code without indentation is harder to read and so should be avoided. The Wikipedia article lists several styles - pick one and follow it.
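As a brief sketch (the variable names here are made up for illustration), this is what an indented block looks like, with each block's contents shifted one level to the right:

```perl
use strict;
use warnings;

my @names = ('Ada', 'Linus');
my @greetings;

# Each nesting level adds one level of indentation, so it is easy to
# see where each block starts and ends:
foreach my $name (@names)
{
    if ($name =~ m/\A[A-Z]/)
    {
        push @greetings, "Hello, $name";
    }
}
```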
No "use strict;" and "use warnings;"
All modern Perl code should have the "use strict;" and "use warnings;" pragmas that prevent or warn against misspelling variable names, using undefined values, and other bad practices. So start your code (in every file) with this:
use strict;
use warnings;
Or:
package MyModule;

use strict;
use warnings;
Correct style for using the open function
The open function is used to open files, sub-processes, etc. The correct style for it is:
open my $input_fh, "<", $input_filename
    or die "Could not open '$input_filename' - $!";
Some wrong, insecure and/or outdated styles are:
# Bad code

# Bareword filehandle (type glob), two arguments open (insecure) and no
# error handling
open INPUT, "<$filename";

# Also opens from $INPUT.
open INPUT;

# Bareword filehandle with three-args open and no exception thrown.
open INPUT, "<", $filename;

# Bareword filehandle with two-args open and an exception (rare, but possible):
open INPUT, "<$filename" or die "Cannot open $filename - $!";

# Lexical file handle with two-args open (instead of three-args open)
# and no exception
open my $input_fh, "<$filename";
Calling variables "file"
Some people call their variables "file". However, "file" can mean a file handle, a file name, or the contents of the file. As a result, this should be avoided, and one can use the abbreviations "fh" for file handles, or "fn" for filenames, instead.
Identifiers without underscores
Some people write their identifiers as several words all in lowercase and not separated by underscores ("_"), which makes the code harder to read. So instead of:
my @namesofpresidents;
Say:
my @names_of_presidents;
Or maybe:
my @presidents_names;
Don't use prototypes for subroutines
Some people are tempted to declare their subroutines as sub my_function ($$@), with a signature of the accepted parameter types, which is called a prototype. However, prototypes tend to break code more often than not, and should be avoided.
If you're looking for parameter lists to functions and methods, take a look at Devel-Declare from CPAN. But don't use prototypes.
For more information, see:
“Why are Perl 5’s function prototypes bad?” on Stack Overflow.
Ampersand in Subroutine Calls
One should not call a subroutine using &myfunc(@args) unless you're sure that is what you want to do (like overriding prototypes). Normally, saying myfunc(@args) is better.
For more information see the relevant post and discussion on Dave Cross’s Perl Hacks site.
Assigning from $_
Some people write code like the following:
while (<$my_fh>)
{
    my $line = $_;
    # Do something with $line…
}
Or:
foreach (@users)
{
    my $user = $_;
    # Process $user…
}
However, you can easily assign the explicit and lexical variables in the loop's opening line like so:
while (my $line = <$my_fh>)
{
    # Do something with $line…
}
and:
foreach my $user (@users)
{
    # Process $user…
}
Using "foreach" on lines
Some people may be tempted to write this code:
foreach my $line (<$my_file_handle>)
{
    # Do something with $line.
}
This code appears to work, but what it does is read the entire contents of the file pointed to by $my_file_handle into a (potentially long) list of lines, and then iterate over them. This is inefficient. In order to read one line at a time, use this instead:
while (my $line = <$my_file_handle>)
{
    # Do something with $line.
}
String Notation
Perl has a flexible way to write strings and other delimiters, and you should utilize it for clarity. If you find yourself writing long strings, write them as here-documents:
my $long_string_without_interpolation = <<'EOF';
Hello there.

I am a long string.

I am part of the string.

And so am I.
EOF
There are also <<"EOF" for strings with interpolation and <<`EOF` for trapping command output. Make sure you never use bareword here-documents (<<EOF), which are valid syntax, but leave readers unsure whether they behave like <<"EOF" or <<'EOF'.
If your strings are not too long but contain the special characters that correspond to the default delimiters (e.g: ', ", `, /, etc.), then you can use the initial letter followed by an arbitrary delimiter: m{\A/home/sophie/perl}, q/My name is 'Jack' and I called my dog "Diego"./.
Non-Lexical Loop Iterators
When writing foreach loops, one should declare the iterator using my instead of pre-declaring it and writing something like foreach $number (@numbers) (you did use use strict;, right?). Otherwise, the iteration variable will be aliased using dynamic scoping, and its value in the loop won't be preserved. So instead of:
# Bad code
my $number;

foreach $number (@numbers)
{
    # do something with $number.
}
You should write:
foreach my $number (@numbers)
{
    # do something with $number.
}
# Now $number is gone.
Slurping a file (i.e: Reading it all into memory)
One can see several bad ways to read a file into memory in Perl. Among them are:
# Not portable and suffers from possible
# shell code injection.
my $contents = `cat $filename`;

# Wasteful of CPU and memory:
my $contents = join("", <$fh>);

# Even more so:
my $contents = '';
while (my $line = <$fh>)
{
    $contents .= $line;
}
You should avoid them all. Instead, the proper way to read an entire file into a long string is to either use a CPAN distribution such as Path-Tiny or IO-All, or alternatively to write the following function and use it:
sub _slurp
{
    my $filename = shift;

    open my $in, '<', $filename
        or die "Cannot open '$filename' for slurping - $!";

    local $/;
    my $contents = <$in>;

    close($in);

    return $contents;
}
Write code in Paragraphs using Empty Lines
If one of your blocks is long, split it into "code paragraphs", with empty lines between them and with each paragraph doing one thing. Then, it may be a good idea to precede each paragraph with a comment explaining what it does, or to extract it into its own function or method.
Use IO::Socket and friends instead of lower-level calls
One should use the IO::Socket family of modules for networking Input/Output instead of the lower-level socket()/connect()/bind()/etc. calls. As of this writing, perlipc contains outdated information demonstrating how to use the lower-level API which is not recommended.
Subroutine Arguments Handling
The first thing to know about handling arguments for subroutines is to avoid referring to them directly by index. Imagine you have the following code:
sub my_function
{
    my $first_name = $_[0];
    my $street = $_[1];
    my $city = $_[2];
    my $country = $_[3];
    .
    .
    .
}
Now, what if you want to add $last_name between $first_name and $street? You'll have to bump all the indexes after it! Moreover, this scheme is error-prone: you may reuse the same index more than once, or miss some indexes.
Instead do either:
sub my_function
{
    my $first_name = shift;
    my $street = shift;
    my $city = shift;
    my $country = shift;
    .
    .
    .
}
Or:
sub my_function
{
    my ($first_name, $street, $city, $country) = @_;
    .
    .
    .
}
The same thing holds for unpacking @ARGV, the array containing the command-line arguments of a Perl program, or any other array. Don't use $ARGV[0], $ARGV[1], etc. directly; instead unpack @ARGV using the methods given above. For processing command-line arguments, you should also consider using Getopt::Long.
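A minimal sketch of Getopt::Long usage (the option names here are invented for illustration):

```perl
use strict;
use warnings;

use Getopt::Long;

my $verbose = 0;
my $output_fn;

GetOptions(
    'verbose!' => \$verbose,      # --verbose / --noverbose boolean flag.
    'output=s' => \$output_fn,    # --output FILENAME takes a string value.
) or die "Error in command line arguments!";

# Whatever remains in @ARGV after GetOptions() are the input filenames.
foreach my $input_fn (@ARGV)
{
    print "Processing '$input_fn'\n";
}
```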
Don’t clobber arrays or hashes
Often people ask how to pass arrays or hashes to subroutines. The answer is that the right way to do it is to pass them as a reference as an argument to the subroutine:
sub calc_polynomial { my ($x, $coefficients) = @_; my $x_power = 1; my $result = 0; foreach my $coeff (@{$coefficients}) { $result += $coeff * $x_power; } continue { $x_power *= $x; } return $result; } print "(4*x^2 + 2x + 1)(x = 5) = ", calc_polynomial(5, [1, 2, 4]);
You shouldn't clobber the subroutine's arguments list with entire arrays or hashes (e.g: my_func(@array1, @array2);
or my_func(%myhash, $scalar)
), as this will make it difficult to extract from @_
.
Named Parameters
If the number of parameters that your subroutine accepts gets too long, or if you have too many optional parameters, make sure you convert it to use named arguments. The standard way to do it is to pass a hash reference or a hash of arguments to the subroutine:
sub send_email
{
    my $args = shift;

    my $from_address = $args->{from};
    my $to_addresses = $args->{to};
    my $subject = $args->{subject};
    my $body = $args->{body};
    .
    .
    .
}

send_email(
    {
        from => 'shlomif@perl-begin.org',
        to => ['shlomif@perl-begin.org', 'sophie@perl-begin.org'],
        subject => 'Perl-Begin.org Additions',
        .
        .
        .
    }
);
Avoid using chop() to trim newline characters from lines
Don't use the built-in function chop() to remove newline characters from the end of lines read using the diamond operator (<>), because this may cause the last character of a line without a trailing line feed character to be removed. Instead, use chomp(). If you expect to process DOS/Windows-like text files, whose lines end with the dual Carriage Return-Line Feed characters, on Unix systems, then use the following in order to trim them: $line =~ s/\x0d?\x0a\z//;.
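A small sketch of the difference, reading from an in-memory filehandle just for illustration:

```perl
use strict;
use warnings;

my $text = "first line\nlast line without newline";
open my $fh, '<', \$text or die "Cannot open in-memory file - $!";

my @lines;
while (my $line = <$fh>)
{
    # chomp() removes the trailing newline only if it is present;
    # chop() would have removed the final "e" of the last line instead.
    chomp $line;
    push @lines, $line;
}
close($fh);
```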
For more information see:
"Understanding Newlines" - by Xavier Noria on OnLAMP.com.
Don't start Modules and Packages with a Lowercase Letter
Both modules and packages (the latter also known as namespaces) and all intermediate components thereof should always start with an uppercase letter, because modules and packages that start with a lowercase letter are reserved for pragmas. So this is bad:
# Bad code

# This is file person.pm
package person;

use strict;
use warnings;

1;
And this would be better:
# Better code!

# This is file MyProject/Person.pm
package MyProject::Person;

use strict;
use warnings;

.
.
.

1;
Avoid Indirect Object Notation
Don't use the so-called “Indirect-object notation” which can be seen in a lot of old code and tutorials and is more prone to errors:
# Bad code
my $new_object = new MyClass @params;
Instead, use the MyClass->new(…) notation:
my $new_object = MyClass->new(@params);
For more information and the motivation for this advice, see chromatic’s article “The Problems with Indirect Object Notation”.
$$myarray_ref[$idx] or $$myhash_ref{$key}
Don't write $$myarray_ref[$idx], which is cluttered and can be easily confused with (${$myarray_ref})->[$idx]. Instead, use the arrow operator: $myarray_ref->[$idx]. The same applies to hash references: $myhash_ref->{$key}.
C-style for loops
Some beginners to Perl tend to use C-style-for-loops to loop over an array's elements:
for (my $i = 0; $i < @array; $i++)
{
    # Do something with $array[$i]
}
However, iterating over the array itself would normally be preferable:
foreach my $elem (@array)
{
    # Do something with $elem.
}
If you still need the index, do:
foreach my $idx (0 .. $#array)
{
    my $elem = $array[$idx];
    # Do something with $idx and $elem.
}

# perl-5.12.0 and above:
foreach my $idx (keys(@array))
{
    my $elem = $array[$idx];
    # Do something with $idx and $elem.
}

# Also perl-5.12.0 and above.
while (my ($idx, $elem) = each(@array))
{
    # Do something with $idx and $elem.
}
An arbitrary C-style for loop can be replaced with a while loop with a “continue” block.
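For instance, a hypothetical C-style loop with a non-trivial step can be rewritten like so:

```perl
use strict;
use warnings;

# Instead of: for (my $i = 0; $i < 5; $i += 2) { ... }
my @collected;
my $i = 0;
while ($i < 5)
{
    push @collected, $i;
}
continue
{
    # The "continue" block runs after every iteration,
    # just like the third clause of a C-style for loop.
    $i += 2;
}
```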
Avoid Intrusive Commenting
Some commenting is too intrusive and interrupts the flow of reading the code. Examples of this are the ######################## hard-rules that some people put in their code, comments using multiple number signs ("#") such as ####, or excessively long comment blocks. Please avoid all those.
Some schools of software engineering argue that if the code's author feels a comment is needed, it usually indicates that the code is not clear and should be refactored (for example, by extracting a method or a subroutine with a meaningful name). This probably does not mean you should avoid writing comments altogether, but excessive commenting can be a red flag.
If you're interested in documenting the public interface of your modules and command-line programs, refer to perlpod, Perl's Plain Old Documentation (POD), which allows one to quickly and easily document one's code. POD has many extensions available on CPAN, which may prove of use.
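As a minimal sketch, here is a hypothetical function documented with POD:

```perl
use strict;
use warnings;

=head1 NAME

greet - return a greeting (a made-up example function)

=head2 greet($name)

Returns the string "Hello, $name!".

=cut

sub greet
{
    my $name = shift;

    return "Hello, $name!";
}
```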
Accessing Object Slots Directly
Since Perl objects are simple references some programmers are tempted to access them directly:
# Bad code
$self->{'name'} = "John";
print "I am ", $self->{'age'}, " years old\n";

# Or even: (Really bad code)
$self->[NAME()] = "John";
However, this is sub-optimal as explained in the Perl for Newbies section about "Accessors", and one should use accessors using code like that:
# Good code.
$self->_name("John");
print "I am ", $self->_age(), " years old\n";
As noted in the link, you can use one of CPAN's many accessor generators to generate accessors for you.
'^' and '$' in Regular Expressions
Some people use "^" and "$" in regular expressions to mean beginning-of-string or end-of-string. However, with the /m flag they mean beginning-of-line and end-of-line respectively, which is confusing. It's a good idea to always use \A for start-of-string and \z for end-of-string, and to specify the /m flag if one needs "^" and "$" for the start/end of a line.
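A short sketch of the difference between "^" with /m and \A:

```perl
use strict;
use warnings;

my $multiline = "first\nsecond";

# With the /m flag, '^' matches at the start of every line:
my $caret_count = () = ($multiline =~ m/^(\w+)/mg);

# \A only ever matches at the very beginning of the string:
my $anchor_count = () = ($multiline =~ m/\A(\w+)/g);
```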
Magic Numbers
Your code should not include unnamed numerical constants also known as "magic numbers" or "magic constants". For example, there is one in this code to shuffle a deck of cards:
# Bad code
for my $i (0 .. 51)
{
    my $j = $i + int(rand(52-$i));
    @cards[$i,$j] = @cards[$j,$i];
}
This code is bad because the meaning of 52 and 51 is not explained and they are arbitrary. A better code would be:
# Good code.

# One of:
my $deck_size = 52;
Readonly my $deck_size => 52;

for my $i (0 .. $deck_size-1)
{
    my $j = $i + int(rand($deck_size-$i));
    @cards[$i,$j] = @cards[$j,$i];
}
(Of course in this case, you may opt to use a shuffle function from CPAN, but this is just for the sake of demonstration.).
String Variables Enclosed in Double Quotes
One can sometimes see people write code like that:
# Bad code
my $name = shift(@ARGV);

print "$name", "\n";

if ("$name" =~ m{\At}i)
{
    print "Your name begins with the letter 't'";
}
However, it's not necessary to enclose $name in double quotes (i.e: "$name"), because it's already a string. Using it by itself, as $name, will do just fine:
# Better code.
my $name = shift(@ARGV);

print $name, "\n";

if ($name =~ m{\At}i)
{
    print "Your name begins with the letter 't'";
}
Also see our page about text generation for other ways to delimit text.
Note that sometimes enclosing scalar variables in double-quotes makes sense - for example if they are objects with overloaded stringification. But this is the exception rather than the rule.
@array[$idx] for array subscripting
Some newcomers to Perl 5 are tempted to write @array[$index] to subscript a single element out of the array @array. However, @array[$index] is a single-element array slice. To get a single element of @array, use $array[$idx] (with a dollar sign). Note that if you want to extract several elements, you can use an array slice such as @array[@indexes] or @array[$x,$y] = @array[$y,$x]; but then it's a list, which should be used in list context.
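A brief sketch of the scalar subscript vs. the slice forms:

```perl
use strict;
use warnings;

my @array = (10, 20, 30, 40);

# A single element - note the dollar sign:
my $third = $array[2];

# Several elements at once - an array slice, used in list context:
my @pair = @array[1, 3];

# Swapping two elements using slices:
@array[0, 1] = @array[1, 0];
```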
Variables called $a and $b
One should not create lexical variables called $a and $b, because there are built-in variables with those names used by sort and other facilities (such as reduce in List::Util), and the lexical variables will interfere with them:
# Bad code
my ($a, $b) = @ARGV;
.
.
.
# Won't work now.
my @array = sort { length($a) <=> length($b) } @other_array;
Instead, use other single-letter variable names such as $x and $y, or, better yet, give them more descriptive names.
Flow Control Statements Without an Explicit Label
One can sometimes see flow-control statements such as next, last or redo used without an explicit label following them, in which case they default to re-iterating or breaking out of the innermost loop. However, this is inadvisable, because later on, one may modify the code to insert a loop in between the innermost loop and the flow control statement, which will break the code. So always append a label to "next", "last" and "redo" and label your loops accordingly:
LINES:
while (my $line = <>)
{
    if ($line =~ m{\A#})
    {
        next LINES;
    }
}
($#array + 1) and Other Abuses of $#.
The $#array notation gives the last index in @array and is always equal to the array's length minus one. Some people use it to signify the length of the array:
# Bad code
my @flags = ((0) x ($#names + 1));
However, this is unnecessary, because one can do it better by evaluating @names in scalar context, possibly by saying scalar(@names):
# Better code.
my @flags = ((0) x @names);
$array[$#array], $array[$#array-1], etc.
One can sometimes see people reference the last elements of arrays using notation such as $array[$#array], $array[$#array-1], or even $array[scalar(@array)-1]. This duplicates the identifier and is error-prone. Perl has a better way: negative indexes. $array[-1] is the last element of the array, $array[-2] is the second-to-last, and so on.
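For instance:

```perl
use strict;
use warnings;

my @queue = ('first', 'middle', 'last');

my $last_elem      = $queue[-1];    # Same as $queue[$#queue], but clearer.
my $second_to_last = $queue[-2];    # Same as $queue[$#queue - 1].
```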
Interpolating Strings into Regular Expressions
One can often see people interpolate strings directly into regular expressions:
# Bad code
my $username = shift(@ARGV);

open my $pass_fh, '<', '/etc/passwd'
    or die "Cannot open /etc/passwd - $!";

PASSWD:
while (my $line = <$pass_fh>)
{
    if ($line =~ m{\A$username}) # Bad code here.
    {
        print "Your username is in /etc/passwd\n";
        last PASSWD;
    }
}

close($pass_fh);
The problem is that when a string is interpolated into a regular expression, it is interpolated as a mini-regex, and special characters there behave like they do in a regular expression. So if I input '.*' on the command line of the program above, it will match all lines. This is a special case of code or markup injection.
The solution is to use \Q and \E to delimit a quotemeta() portion, which treats the interpolated string as plain text with all the special characters escaped. So the line becomes: if ($line =~ m{\A\Q$username\E}).
Alternatively, if you do intend to interpolate a sub-regex, signify this fact with a comment. And be careful with regular expressions that are accepted from user input.
Overusing $_
It's a good idea not to overuse $_, because using it, especially in large scopes, is prone to errors, including many subtle ones. Most Perl operations can operate on other variables, and you should use lexical variables with meaningful names instead of $_ whenever possible.
Some places where you have to use $_ are map, grep and similar functions, but even there it might be desirable to assign a lexical variable to the value of $_ right away: map { my $line = $_; … } @lines.
Mixing Tabs and Spaces
Some improperly configured text editors may be used to write code that, while indented well at a certain tab size, looks terrible at other tab sizes, due to a mixture of tabs and spaces. So either use only tabs for indentation or make sure your tab key expands to a constant number of spaces. You may also wish to make use of Perl-Tidy to properly format your code.
`…` or qx// for Executing Commands
Some people are tempted to use backticks (`…`) or qx/…/ to execute commands for their side effects. E.g:
# Bad code
use strict;
use warnings;

my $temp_file = "tempfile.txt";

`rm -f $temp_file`;
However, this is not idiomatic, because `…` and qx/…/ are meant to trap a command's output and return it as a big string or as a list of lines. It would be a better idea to use system(), or to seek more idiomatic Perl-based solutions on CPAN or in the Perl core (such as using unlink() to delete a file, in our case).
Some people even go and ask how to make the qx/…/ output go to the screen, which is a clear indication that they want system().
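A sketch of the more idiomatic version of the temporary-file example above, using the built-in unlink() instead of shelling out:

```perl
use strict;
use warnings;

my $temp_file = "tempfile.txt";

# Create a scratch file so there is something to delete.
open my $out, '>', $temp_file
    or die "Cannot open '$temp_file' for writing - $!";
print {$out} "scratch data\n";
close($out);

# Idiomatic deletion - no shell involved, and errors can be checked:
unlink($temp_file)
    or die "Could not delete '$temp_file' - $!";
```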
No Explicit Returns
As noted in "Perl Best Practices", all functions should have an explicit return statement, as otherwise they implicitly return the last evaluated expression, which is subject to change as the code changes. If you don't want the subroutine to return anything (i.e: it's a so-called "procedure"), then write return; which always returns a false value that the caller cannot do anything meaningful with.
Another mistake is to write return 0; or return undef; to return false, because in list context these return a one-element list, which is considered true. So always write return; to return false.
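A short demonstration of why return undef; is dangerous in list context:

```perl
use strict;
use warnings;

sub bad_procedure  { return undef; }    # Bad: one-element list = true.
sub good_procedure { return; }          # Good: empty list in list context.

my @bad  = bad_procedure();     # Contains a single undef element.
my @good = good_procedure();    # Genuinely empty.
```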
"Varvarname" - Using a variable as another variable's name.
Mark Jason Dominus has written about the "varvarname" ("Why it's stupid to 'use a variable as a variable name'"): namely, if $myvar is 'foobar', some people want to operate on the value of $foobar. While there are ways to achieve similar things in Perl, the best way is to use hashes (possibly pointing to complex records with more information) and look the values up by the string you want to use. Read the articles by Mark Jason Dominus for more information.
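A minimal sketch of the hash-lookup approach (the data here is made up for illustration):

```perl
use strict;
use warnings;

# Instead of trying to use the *value* of $which as a variable name,
# look the string up in a hash:
my %color_of = (
    apple  => 'red',
    banana => 'yellow',
);

my $which = 'banana';
my $color = $color_of{$which};
```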
Several synchronised arrays.
Related to “varvarname” is the desire of some beginners to use several different arrays with synchronised content, so the same index at every array will contain a different piece of data for the same record:
# Bad code
my @names;
my @addresses;
my @ages;
my @phone_numbers;
.
.
.
push @names, 'Isaac Newton';
push @addresses, '10 Downing St.';
push @ages, 25;
push @phone_numbers, '123456789';
These arrays will become hard to synchronise, and this is error prone. A better idea would be to use an array (or a different data structure) of hash references or objects:
my @people;

push @people, Person->new(
    {
        name => 'Isaac Newton',
        address => '10 Downing St.',
        age => 25,
        phone_number => '123456789',
    },
);
Use Leading Underscores ('_') for Internal Methods and Functions
When writing a module, use leading underscores in the identifiers of methods and functions to signify those that are: 1. subject to change; 2. used internally by the module; and 3. not to be used from outside. By using Pod-Coverage, one can make sure the external API of the module is documented; it will skip the identifiers with leading underscores, which can be thought of as "private".
Here's an example:
package Math::SumOfSquares;

use strict;
use warnings;

use List::Util qw(sum);

sub _square
{
    my $n = shift;

    return $n * $n;
}

sub sum_of_squares
{
    my ($numbers) = @_;

    return sum(map { _square($_) } @$numbers);
}

1;
print $fh @args
It is preferable to write print {$write_fh} @args over print $write_fh @args, because the latter can easily be mistaken for print $write_fh, @args (which does something different), and it does not provide enough visual hints that you are writing to the $write_fh filehandle. Therefore, always wrap the file handle in curly braces (a so-called "dative block"). (Inspired by "Perl Best Practices".)
Using STDIN instead of ARGV
One can write code while reading from STDIN:
# Bad code
use strict;
use warnings;

# Strip comments.
LINES:
while (my $line = <STDIN>)
{
    if ($line =~ m{\A *#})
    {
        next LINES;
    }
    print $line;
}
However, it is usually better to use ARGV instead of STDIN, because it also allows processing the filenames given on the command line. This can be achieved by simply saying <>. So the code becomes:
# Better code:
use strict;
use warnings;

# Strip comments.
LINES:
while (my $line = <>)
{
    if ($line =~ m{\A *#})
    {
        next LINES;
    }
    print $line;
}
Modifying arrays or hashes while iterating through them.
Some people ask how to add or remove elements of an existing array or hash while iterating over it using foreach and other loops. The answer is that Perl will likely not handle this well: it expects the keys of a data structure to remain constant during a loop.
The best way to achieve something similar is to populate a new array or hash during the loop by using push() or a hash lookup and assignment. So do that instead.
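A small sketch of filtering into a fresh hash instead of deleting entries mid-iteration (the data here is invented for illustration):

```perl
use strict;
use warnings;

my %scores = (alice => 90, bob => 40, carol => 75);

# Don't delete from %scores while iterating over it with each();
# populate a new hash instead:
my %passing;
while (my ($name, $score) = each(%scores))
{
    if ($score >= 60)
    {
        $passing{$name} = $score;
    }
}
```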
Comments and Identifiers in a Foreign Language
Apparently, many non-native English speakers write code with comments, and even identifiers, in their native language. The problem with this is that programmers who do not speak that language will have a hard time understanding what is going on, especially after the writers of the foreign-language code post it to an Internet forum in order to get help with it.
Consider what Eric Raymond wrote in his "How to Become a Hacker" document (where hacker is a software enthusiast and not a computer intruder):
4. If you don't have functional English, learn it.
As an American and native English-speaker myself, I have previously been reluctant to suggest this, lest it be taken as a sort of cultural imperialism. But several native speakers of other languages have urged me to point out that English is the working language of the hacker culture and the Internet, and that you will need to know it to function in the hacker community.
Back around 1991 I learned that many hackers who have English as a second language use it in technical discussions even when they share a birth tongue; it was reported to me at the time that English has a richer technical vocabulary than any other language and is therefore simply a better tool for the job. For similar reasons, translations of technical books written in English are often unsatisfactory (when they get done at all).
Linus Torvalds, a Finn, comments his code in English (it apparently never occurred to him to do otherwise). His fluency in English has been an important factor in his ability to recruit a worldwide community of developers for Linux. It's an example worth following.
Being a native English-speaker does not guarantee that you have language skills good enough to function as a hacker. If your writing is semi-literate, ungrammatical, and riddled with misspellings, many hackers (including myself) will tend to ignore you. While sloppy writing does not invariably mean sloppy thinking, we've generally found the correlation to be strong — and we have no use for sloppy thinkers. If you can't yet write competently, learn to.
So if you're posting code for public scrutiny, make sure it is written with English identifiers and comments.
Using perlform for formatting text.
One should not use “perlform” for formatting text, because it makes use of global identifiers, and should use the Perl6-Form CPAN distribution instead. Also see our text generation page for more information. (Inspired by "Perl Best Practices").
Using $obj->new for object construction.
Sometimes you can see class constructors such as:
# Bad code
sub new
{
    my $proto = shift;
    my $class = ref($proto) || $proto;
    my $self = {};
    …
}
The problem here is that this allows one to call $my_object_instance->new() to create a new instance, while many people will expect such a call to either be invalid or to clone the object. So don't do that; instead, write your constructors as:

# Better code:
sub new
{
    my $class = shift;
    my $self = {};
    bless $self, $class;
    …
}

This disables calling new() on an instance, and only allows ref($my_object_instance)->new(…). If you need a clone method, then write one called clone() and don't use new() for that.
(Thanks to Randal L. Schwartz's post "Constructing Objects" for providing the insight to this).
Law of Demeter
See the Wikipedia article about the “Law of Demeter” for more information. Namely, long chains of nested method calls, such as $self->get_employee('sophie')->get_address()->get_street(), are not advisable and should be avoided.
A better option would be to provide methods in the containing objects to access those methods of their contained objects. And an even better way would be to structure the code so that each object handles its own domain.
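As an illustrative sketch (the Company and Employee class names, fields and methods here are hypothetical, not taken from the text above), a delegating accessor on the containing object keeps callers from reaching through the object graph:

```perl
use strict;
use warnings;

package Employee;
sub new    { my ($class, %args) = @_; return bless {%args}, $class; }
sub street { my ($self) = @_; return $self->{street}; }

package Company;
sub new { my ($class) = @_; return bless { employees => {} }, $class; }

sub add_employee
{
    my ($self, $name, $employee) = @_;
    $self->{employees}{$name} = $employee;
    return;
}

# Delegating accessor: callers ask the company for the street,
# instead of chaining through the contained objects themselves.
sub employee_street
{
    my ($self, $name) = @_;
    return $self->{employees}{$name}->street();
}

package main;
my $company = Company->new;
$company->add_employee('sophie', Employee->new(street => 'High Street'));
print $company->employee_street('sophie'), "\n";
```

Callers now depend only on Company's interface, so the internal representation of employees and addresses can change freely.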
Passing parameters in delegation
Sometimes we encounter a case where subroutines each pass the same parameter to one another in delegation, just because the innermost subroutines in the callstack need it.
To avoid it, create a class, and declare methods that operate on the fields of the class, where you can assign the delegated arguments.
Duplicate Code
As noted in Martin Fowler's "Refactoring" book (but held as a fact for a long time beforehand), duplicate code is a code smell, and should be avoided. The solution is to extract duplicate functionality into subroutines, methods and classes.
Long Functions and Methods
Another common code smell is long subroutines and methods. The solution to these is to extract several shorter methods out, with meaningful names.
Using map instead of foreach for side-effects
You shouldn't use map to iterate over a list in place of foreach when you are not interested in constructing a new list, but only in the side-effects. For example:
# Bad code
use strict;
use warnings;

map { print "Hello $_!\n"; } @ARGV;
Would be better written as:
use strict;
use warnings;

foreach my $name (@ARGV)
{
    print "Hello $name!\n";
}
Which better conveys one's intention and may be a bit more efficient.
Using the ternary operator for side-effects instead of if/else
A similar symptom to the above is using the ternary inline-conditional operator (? :) to choose which of two different side-effectful statements to execute, instead of using if and else. For example:
# Bad code
$cond_var
    ? ($hash{'if_true'} .= "Cond var is true")
    : ($hash{'if_false'} .= "Cond var is false");
(This is assuming the ternary operator was indeed written correctly, which is not always the case).
However, the ternary operator is meant to be an expression choosing between two values, and should not be used for its side-effects. For the latter, just use if and else:
if ($cond_var)
{
    $hash{'if_true'} .= "Cond var is true";
}
else
{
    $hash{'if_false'} .= "Cond var is false";
}
This is safer, and better conveys one’s intentions.
For more information, refer to a relevant thread on the Perl beginners mailing list (just make sure you read it in its entirety).
Nested top-level subroutines
One should not nest a named inner subroutine (declared using sub inner) inside an outer one, like so:

# Bad code
sub outer
{
    sub inner
    {
        ...
    }

    # Use inner here
}
This code will compile and run, but may break in subtle ways.
The first problem with this approach is that inner() will still be visible outside outer(); but the more serious problem is that the inner subroutine will only get one copy of the lexical variables of outer().
The proper and safer way to declare an inner subroutine is to declare a lexical variable and set it to an anonymous subroutine, which is also known as a closure:
sub outer
{
    my ($foo, $bar) = @_;

    my $print_foo = sub {
        print "Foo is '$foo'\n";
        return;
    };

    $print_foo->();

    $foo++;

    $print_foo->();

    return;
}
Using grep instead of any and friends
Sometimes one can see people using grep to find the first matching element in an array, or to check whether any matching element exists at all. However, grep is intended to extract all matching elements out of a list, not just the first one, and as a result it will not stop until it has scanned the entire list. To remedy this, look at first() (to find the first match) or any/all/none/notall (to check whether a matching element exists), all from List::Util. These better convey one's intention and may be more efficient, because they stop at the first match.
One should note that if one does such lookups often, then they should try to use a hash instead.
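As a small sketch of the preferred style, using first() and any() from the core List::Util module (the word list is made up for illustration):

```perl
use strict;
use warnings;
use List::Util qw(first any);

my @words = qw(apple banana cherry);

# first() returns the first matching element and stops scanning there.
my $first_b = first { /\Ab/ } @words;

# any() returns a boolean and also short-circuits on the first match.
my $has_cherry = any { $_ eq 'cherry' } @words;

print "$first_b\n" if defined $first_b;
print "found cherry\n" if $has_cherry;
```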
Using the FileHandle Module
The FileHandle module is old and bad, and should not be used. One should use the IO::Handle family of modules instead.
"Including" files instead of using Modules
We are often asked how one can "include" a file in a Perl program (similar to PHP's include, or the shell's "source" or "." operators). The answer is that the better way is to extract the common functionality of the programs into modules, and to load those modules using "use" or "require".
Note that do can be used to evaluate a file (but in a different scope), but it's almost always not needed.
Some people are looking to supply a common configuration to their programs as global variables in the included files, and those people should look at CPAN configuration modules such as Config-IniFiles or the various JSON modules for the ability to read configuration files in a safer and better way.
Using Global Variables as an Interface to the Module
While it is possible to a large extent, one should generally not use global variables as the interface to a module; prefer a procedural or an object-oriented interface instead. For information about this, see our page about modules and packages and our page about object oriented programming in Perl.
Declaring all variables at the top ("predeclarations")
Some inexperienced Perl programmers, possibly by influence from languages such as C, like to declare all variables used by the program at the top of the program or the relevant subroutines. This has been called "predeclarations":
# Bad code
my $first_name;
my $last_name;
my $address;
my @people;
my %cities;

...
However, this is bad form in Perl. The preferable way is to declare each variable when it is first used, at the innermost scope where it should retain its value. This makes the variables easier to keep track of.
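A minimal sketch of the preferred style, declaring each variable in the smallest scope that needs it (the names and data are illustrative):

```perl
use strict;
use warnings;

# Declared here because it is used from here on.
my @people = ('Alice Smith', 'Bob Jones');

my @formatted;
foreach my $person (@people)
{
    # $first_name and $last_name live only inside this loop body.
    my ($first_name, $last_name) = split / /, $person;
    push @formatted, "$last_name, $first_name";
}

print "$_\n" for @formatted;
```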
Using Switch.pm
One should not use Switch.pm to implement a switch statement, because it is a source filter, tends to break a lot of code, and causes unexpected problems. Instead, one should use given/when (available in perl-5.10 and above, though later marked experimental), or dispatch tables, or plain if/elsif/else structures.
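A dispatch table is simply a hash of code references keyed by the value being switched on; a minimal sketch (the command names here are made up for illustration):

```perl
use strict;
use warnings;

# Dispatch table: maps a command name to a handler code reference.
my %dispatch = (
    add => sub { my ($x, $y) = @_; return $x + $y; },
    mul => sub { my ($x, $y) = @_; return $x * $y; },
);

sub run_command
{
    my ($name, @args) = @_;
    my $handler = $dispatch{$name}
        or die "Unknown command '$name'";
    return $handler->(@args);
}

print run_command('add', 2, 3), "\n";
print run_command('mul', 2, 3), "\n";
```

Adding a new case is just adding a new hash entry, with no source filtering involved.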
Using threads in Perl
Some beginners, when they think they need to multitask their programs, start thinking they should use perl threads. However, as mentioned in perlthrtut, perl threads are very much unlike traditional threads: they share nothing by default, and are in fact heavyweight, process-like entities (instead of the usual lightweight threads). See also Elizabeth Mattijsen’s write-up about perl's ithreads on perlmonks.
To sum up, usually threads are the wrong answer and you should be using forking processes or something like POE (see our page about multitasking) instead.
Calling Shell Commands Too Much
Some people are tempted to perform various tasks using shell commands, via backticks (`…`), qx/…/, system(), piped open, etc. However, Perl usually has built-in functions, or alternatively CPAN modules, which are more portable and often faster than calling the shell for help, and those should be used instead.
As an extreme example, the site The Daily WTF once featured the following code for determining a file's size in Perl:
# Bad code
my $filesize = `wc -c $file | cut -c0-8 | sed 's/ //g'`;
Reportedly, replacing this line with my $filesize = -s $file; (where, as noted earlier, the variable should have been called $filename instead of $file) made the program 75 minutes faster on average (!).
Normally, if you find yourself shelling out to UNIX text processing commands such as “sed”, “awk”, “grep”, and “cut”, you should implement that functionality in pure-Perl code instead.
Missing Semicolons at the end of blocks
The perl interpreter allows one to omit the last trailing semicolon (";") in the containing block. Like so:
# Bad code
if ( COND() )
{
    print "Success!\n";
    call_routine() # No semicolon here.
}
However, this isn't a good idea, because it is inconsistent, and may cause errors (or obscure failures) if one-or-more statements are added afterwards.
As a result, you should end every statement with a semicolon (";"), even if it is the last one in its block. A possible exception may be single-line and/or single-statement blocks, like those in map.
List form of open with one argument.
Recent versions of perl introduced the list forms of piping to or from a command, such as open my $fh, '-|', 'fortune', $collection or open my $printer, '|-', 'lpr', '-Plp1'. However, not only are they not yet implemented on Windows and some other operating systems, but when one passes them only a single argument, that argument is passed to the shell verbatim.
As a result, if one passes an array variable to them, as in:
# Bad code
open my $fh, '-|', @foo
    or die "Could not open program! - $!";
then, if @foo happens to contain only a single element, that element will be passed to the shell, which is dangerous. To mitigate that, one should use the IPC-Run or the IPC-System-Simple CPAN distributions.
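A hedged sketch of the safer approach, assuming the IPC-System-Simple CPAN distribution is installed: its capturex() function never invokes the shell, passing each argument verbatim to the command. Here the current perl binary ($^X) stands in for an arbitrary external command, so the example is portable:

```perl
use strict;
use warnings;
use IPC::System::Simple qw(capturex);

# capturex(COMMAND, @args): each argument is passed verbatim to the
# command, so a single-element argument list is still shell-safe.
my @args   = ('-e', 'print "hello"');
my $output = capturex($^X, @args);    # $^X is the running perl binary

print "$output\n";
```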
Trailing Whitespace
With many text editors, it is common to write new code, or modify existing code, in a way that leaves some lines with trailing whitespace: space (ASCII 32, 0x20) or tab characters at the end of the line. This trailing whitespace normally does not cause much harm, but it is not needed, harms the code’s consistency, and may get in the way of patching/diffing and version control tools. Furthermore, it can usually be eliminated easily and without harm.
Here is an example of trailing whitespace, demonstrated using the --show-ends flag of the GNU cat command:
> cat --show-ends toss-coins.pl
#!/usr/bin/env perl$
$
use strict;$
use warnings;$
$
my @sides = (0,0);$
$
my ($seed, $num_coins) = @ARGV;$
$
srand($seed); $
$
for my $idx (1 .. $num_coins)$
{$
    $sides[int(rand(2))]++;$
$
    print "Coin No. $idx\n";$
}$
$
print "You flipped $sides[0] heads and $sides[1] tails.\n";$
>
While you should not feel bad about having trailing whitespace, it is a good idea to occasionally search for it using a command such as ack '[ \t]+$' (in ack version 1.x it should be ack -a '[ \t]+$'; see ack), and get rid of it.
Some editors can also highlight trailing whitespace when it is present.
Finally, several CPAN modules can check for trailing whitespace and report it.
Misusing String Eval
String eval allows one to compile and execute (possibly generated) strings as Perl code. While it is a powerful feature, there are usually better and safer ways to achieve what you want than string eval "". So you should use it only if you are an expert and really know what you are doing.
Related to string eval is using two or more /e flags in an s/// substitution. While one /e flag is often useful (for example, when substituting counters, as in s/#\./($i++)."."/ge), a second /e flag just evaluates the generated expression again. That can instead be done with a string eval inside the right-hand side, assuming it is needed at all, which is normally not the case.
Named Parameters That Start With Dash
If you're defining interfaces that accept a flattened hash or a hash reference of named parameters, there is no need to call the parameters with keys starting with a dash, like so:
# Bad code
my $obj = MyClass->new(
    {
        -name => "George",
        -occupation => "carpenter",
        -city => "Inverness",
    }
);
The dashes are not needed, because Perl can handle plain keys that contain only alphanumeric characters and underscores, and the dashes just add clutter to the code. Named arguments starting with dashes were prevalent in some early modules such as Tk or Config-IniFiles, but they should not be used in more modern modules.
Instead, design your interfaces with calling conventions like so:
my $obj = MyClass->new(
    {
        name => "George",
        occupation => "carpenter",
        city => "Inverness",
    }
);
Code and Markup Injection
Care must be taken when constructing statements that are passed to an interpreter, when putting arbitrary strings inside (using string interpolation or other methods). This is because if the strings are subject to input from the outside world (including the users), then one can use specially crafted strings for executing arbitrary commands and exploiting the system.
An example of this is outputting HTML using print "<p>" . $paragraph_text . "</p>\n";, which may allow inserting arbitrary, malicious markup inside $paragraph_text, including malicious JavaScript that can steal passwords or alter the page’s contents.
For more information, see:
“Code/Markup Injection and Its Prevention” resource on this site.
Wikipedia articles about SQL injection and Cross-site scripting.
The site Bobby Tables about SQL injections.
Initializing Arrays and Hashes from Anonymous References
Some beginners to Perl are tempted to use the anonymous-array reference constructor ([ … ]) to initialise array variables, or the anonymous-hash reference constructor ({ … }) to initialise hash variables, like so:
# Bad code
my @arr = [1 .. 10];

my %uk_info = {
    continent => "Europe",
    capital => "London",
};
However, these reference constructors actually create a single scalar containing a reference. As a result, in the array case one gets a single-element array, and in the hash case one gets a warning and a hash initialised with only a single key (the reference, converted to a nonsensical string).
Array and hash variables should be initialized using lists enclosed in parentheses:
my @arr = (1 .. 100);

my %uk_info = (
    continent => "Europe",
    capital => "London",
);
For more information about the difference between references and aggregate variables, refer to our references page.
Overly Long Lines in the Source Code
It is a good idea to avoid overly long lines in the source code, because they need to be scrolled horizontally to be read, and may not fit within the margins of your co-developers’ text editors. If the lines are too long, you should break or reformat them, for example by adding a newline before or after an operator, or by breaking long string constants into several parts joined with the string concatenation operator (".").
Many coding standards require lines to fit within 80 characters or 78 characters or so, and you should standardise on a similar limit for your own code.
Getting rid of special entries in directory contents
Calling readdir() repeatedly, or calling it in list context, will normally return the two special entries . (the current directory) and .. (the parent directory), which should normally be skipped. One can often find people trying to skip them in various sub-optimal ways:
# Bad code
if ($dir_entry =~ m/\A\./)        # Will skip all directories that start with dot.
if ($dir_entry =~ m/^\./)         # Same, but \A is preferable for start-of-string.
if ($dir_entry =~ m/\A\.\.?\z/)   # Obfuscated.
if ($dir_entry =~ m/\A\.{1,2}\z/) # Not much better.
if ($dir_entry eq "." or $dir_entry eq "..") # May not be portable.
The best way to do that is to use File::Spec’s no_upwards() function:
foreach my $entry (File::Spec->no_upwards(readdir($dir_handle)))
{
}
Note that Path-Tiny wraps that for you in its children() method, and other file-system abstraction modules provide similar functionality.
Assigning a List to a Scalar Variable
Normally, assigning to a scalar variable from a function or expression that returns a list will not yield what you want:
# Bad code
my $characters = split(//, $string);
This will cause the list returned by split to be evaluated in scalar context, yielding a single (and not very meaningful) scalar value. You normally want one of these:
my @characters = split(//, $string);

my $chars_aref = [ split(//, $string) ];

my $num_chars = () = split(//, $string); # Use length instead in this case.
A lot of the confusion stems from the fact that people expect arrays in Perl to be contained directly in scalars. For more information about that, consult our page about references.
Regular Expressions starting or ending with “.*”
It is not necessary to put .* or .*? at the beginning or end of a regular expression in order to match something anywhere inside the string. So, for example, if ($hay_stack =~ /.*ab+c.*/) can be replaced with the simpler if ($hay_stack =~ /ab+c/). If you wish to match and extract the prefix, you should write (.*?) or (.*) explicitly.
Recursive Directory Traversal Without Using File::Find and Friends
Some beginners to Perl are tempted to write a recursive directory traversal (i.e: finding all files in a directory, its sub-directories, its sub-sub-directories, etc.) by using procedural recursion or other sub-optimal means. However, the idiomatic way is to use the core module File::Find or its CPAN friends. For more information, see our resources about directory traversal.
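A minimal sketch using the core File::Find module; the temporary directory tree below is fabricated just to give find() something to traverse:

```perl
use strict;
use warnings;
use File::Find qw(find);
use File::Temp qw(tempdir);
use File::Spec;

# Build a small directory tree to traverse.
my $dir = tempdir(CLEANUP => 1);
mkdir File::Spec->catdir($dir, 'subdir') or die "mkdir: $!";
open my $out, '>', File::Spec->catfile($dir, 'subdir', 'a.txt')
    or die "open: $!";
close $out;

# find() calls the "wanted" callback for every entry, recursively.
# Inside the callback, $_ is the entry's basename and
# $File::Find::name is its full path.
my @found;
find(sub { push @found, $File::Find::name if -f $_; }, $dir);

print "$_\n" for @found;
```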
Using File::Find for listing the contents of a directory non-recursively
Alternatively, sometimes people are tempted to use File::Find or similar modules to non-recursively list the contents of a single directory. However, in this case, it is a better idea to simply use opendir(), readdir() and closedir(), in conjunction with no_upwards, or an abstraction of them.
File::Find and friends should be reserved for a recursive traversal.
Populating an Array with Multiple Copies of the Same Reference
You can sometimes see code like that:
# Bad code
my @array_of_arrays = ([]) x $num_rows;
Or:
# Bad code
my @row;
my @array_of_rows;
foreach my $elem (@existing_array)
{
    @row = generate_row($elem);
    push @array_of_rows, \@row;
}
The problem with code like this is that the same referent (see our resources about references in Perl) is being used in all places in the array, and so they will always be synchronised to the same contents.
As a result, the two code excerpts should be written as such instead:
my @array_of_arrays = map { [] } (1 .. $num_rows);
And:
my @array_of_rows;
foreach my $elem (@existing_array)
{
    my @row = generate_row($elem);
    push @array_of_rows, \@row;
}
Or alternatively:
my @array_of_rows;
foreach my $elem (@existing_array)
{
    push @array_of_rows, [generate_row($elem)];
}
Scalar reference to a constant (e.g: "(\undef)")
This code - that uses map - will also generate a list of identical elements:
# Bad code
my @array = (map { (\ undef ) } (0 .. 9));
A solution is to use "do {}":
my @array = (map {
    do {
        my $empty_var;
        \$empty_var;
    }
} (0 .. 9));
Conditional my declarations.
It is not a good idea to append a trailing if statement modifier to a declaration of a lexical variable using my:
# Bad code
my $var = VALUE() if (COND());

my ($var1, @array2) if (COND());
This code might compile and appear to run, but what you probably want is a lexical variable declared for the rest of its scope; moreover, declarations in a false conditional have undefined behaviour, and became a fatal error in perl-5.30. If you need to assign to the variable conditionally, do it in a separate statement:
my $var;

if (COND())
{
    $var = VALUE();
}
Using One Variable for Two (or More) Different Purposes
Within the scope of its declaration, a variable should serve one purpose, and serve it well. One should not re-use a variable for a completely different purpose later in the scope. Creating new variables is cheap in Perl, so there is no reason to sacrifice clarity in order to avoid them.
Using \1 instead of $1 on the Right Hand Side of a Substitution
There is no good reason to use \1, \2, etc. on the right-hand side of a substitution instead of $1, $2, etc. While this may work, the backslash-digit forms are meant to be back-references, i.e., for matching the exact string of an earlier capture again within the left-hand side of a regular expression:
# Bad code
$s =~ s/(H\w+)\s+(W\w+)/\1 [=] \2/;
Better code:
$s =~ s/(H\w+)\s+(W\w+)/$1 [=] $2/;
Appending using $array[$i++] = $value_to_append;
Some people are tempted to append elements into an array using:
# Bad code
my $last_idx = 0;
my @array;
foreach ...
{
    $array[$last_idx++] = $new_elem;
}
However, it is better to use the push built-in function, which gets rid of the explicit index and makes the code less error-prone:
my @array;
foreach ...
{
    push @array, $new_elem;
}
Premature Optimisation
On various online Perl forums, we often get asked questions like: “What is the speediest way to do task X?” or “Which of these pieces of code will run faster?”. The answer is that in this day and age of extremely fast computers, you should optimise for clarity and modularity first, and worry about speed when, and if, you find that it has become a problem. Professor Don Knuth had this to say about it:
The improvement in speed from Example 2 to Example 2a is only about 12%, and many people would pronounce that insignificant. The conventional wisdom shared by many of today's software engineers calls for ignoring efficiency in the small; but I believe this is simply an overreaction to the abuses they see being practiced by penny-wise-and-pound-foolish programmers, who can't debug or maintain their "optimized" programs. In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal; and I believe the same viewpoint should prevail in software engineering. Of course I wouldn't bother making such optimizations on a one-shot job, but when it's a question of preparing quality programs, I don't want to restrict myself to tools that deny me such efficiencies.
There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of non-critical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.
(Knuth reportedly attributed the exact quote to C.A.R. Hoare.)
While you should be conscious of efficiency, and the performance sanity of your code and algorithms when you write programs, excessive and premature micro-optimisations are probably not going to yield a major performance difference.
If you do find that your program runs too slowly, refer to our page about Optimising and Profiling Perl code.
Not Using Version Control
For everything except short throwaway scripts and otherwise incredibly short programs, there is no good excuse not to use a version control system (a.k.a. "revision control systems", "source control systems", or, more generally, part of "software configuration management"). This is especially true nowadays, given the availability of several powerful, easy-to-use, open-source (and as a result free-of-charge), cross-platform version control systems, which you should have little trouble deploying, learning and using.
For the motivation behind using version control systems, see the relevant section of the fifth part of “Perl for Perl Newbies”, which contains further discussion, some links, and a demonstration.
Some links for further discussion:
The Free Version Control Systems Appendix of Producing Open Source Software.
The Wikipedia List of revision control software.
“You Must Hate Version Control Systems” - a discussion on Dave Cross’ blog about best practices in the software development industry.
Not Writing Automated Tests
Automated tests help verify that the code works correctly and that bugs are not introduced by refactoring or by the addition of new features, and they also provide specifications of, and interface documentation for, the code. As a result, writing automated tests has been considered a good practice for a long time.
For more information about how to write automated tests, see our page about quality assurance in Perl.
Not Using a Continuous Integration System
A continuous integration system builds every commit from the main version control repository’s branches and runs the associated automated tests, inside a mostly pristine environment which explicitly requires installing all the dependencies. It is a good idea to use such a system.
Services such as Travis CI and Appveyor allow one to test GitHub projects with relatively minimal hassle, and one can also set up their own continuous integration server using tools such as Jenkins.
How to Properly Autoflush
One can sometimes see people using $| = 1; to enable autoflush on their filehandles. However, this way is cryptic and more error-prone than doing something like:
use IO::Handle;

STDOUT->autoflush(1);
This makes the intent clearer and is easier to get right.
Conditional "use" statements
Some people are tempted to do something like that:
# Bad code
if ($is_interactive)
{
    use MyModule;
}
However, use is executed at compile time, so the module will always be loaded, regardless of the condition. Instead, one should use the if module from CPAN, or require (together with an explicit import() call if needed), or, less preferably, a string-eval containing the use statement.
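A short sketch of both alternatives; the conditions and module choices below are purely illustrative:

```perl
use strict;
use warnings;

# The "if" module loads a module at compile time, but only when the
# condition holds.  Here List::Util is loaded (and sum() imported)
# only on perl 5.10 and above, which is always true on modern perls.
use if $] >= 5.010, 'List::Util' => qw(sum);

# A run-time alternative: require plus an explicit import() call
# inside an ordinary if block.
if ($] >= 5.010)
{
    require Scalar::Util;
    Scalar::Util->import('blessed');
}

print sum(1, 2, 3), "\n";
```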
Parsing XML/HTML/JSON/CSV/etc. using regular expressions
You should not try to parse HTML, XML, JSON, CSV, and other complex grammars using regular expressions. Instead, use a CPAN module. For more information see our page about Parsing in Perl.
Using most of the Perl punctuation variables from perlvar
perlvar documents many punctuation variables, but most of them make the code hard to read, have better alternatives, and should be avoided.
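For example, the core English module provides readable aliases for several of them; a small sketch:

```perl
use strict;
use warnings;
use English qw( -no_match_vars );

# Readable aliases for cryptic punctuation variables:
#   $@ -> $EVAL_ERROR, $! -> $OS_ERROR, $0 -> $PROGRAM_NAME
eval { die "boom\n" };
my $error = $EVAL_ERROR;

print "caught: $error";
print "program: $PROGRAM_NAME\n";
```

The -no_match_vars option avoids the regular-expression performance penalty discussed later in this document.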
Generating invalid Markup (of HTML/etc.)
You should make sure that the HTML markup you generate is valid and that it validates as XHTML 1.0, HTML 4.01, HTML5, or a different modern standard. For more information, see the “Designing for Compatibility” section in a previous talk.
Some bad code is:
# Bad code
print <<'EOF';
<P>
<FONT COLOR="red">Hello.
<P>
<FONT COLOR="green">Mr. Smith
EOF
A better code would be:
print <<'EOF';
<p class="greeting">
Hello
</p>
<p class="name">
Mr. Smith
</p>
EOF
Capturing Instead of Clustering in Regular Expressions
If you want to group a certain sub-expression in a regular expression without needing to capture it (into the $1, $2, $3, etc. variables and the related capture variables), then you should cluster it using (?: … ) instead of capturing it using a plain ( … ), or, alternatively, not group it at all if grouping is not needed. That is because a cluster is faster and cleaner, and better conveys your intention, than a capture.
# Bad code
if (my (undef, $match) = ($str =~ /\A(BA+)*\[([^\]]+)\]/))
{
    print "Found $match\n";
}
A better code would be:
if (my ($match) = ($str =~ /\A(?:BA+)*\[([^\]]+)\]/))
{
    print "Found $match\n";
}
Note that if you can afford to run your code only on perl 5.10.x and above, then you can use named captures.
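A brief sketch of named captures (requiring perl 5.10 and above); the sample string is made up:

```perl
use strict;
use warnings;

my $my_string = "John Smith";

# Named captures document what each group means, and the %+ hash
# makes the extraction self-describing.
my $greeting;
if ($my_string =~ /\A(?<first_name>\w+) (?<last_name>\w+)/)
{
    $greeting = "Hello $+{first_name} $+{last_name}!";
}

print "$greeting\n";
```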
Using Regex Captures Without Checking if the Match was Successful
It is tempting to perform a regular expression match and then use the capture variables ($1, $2, $3, etc.) without checking that the match was successful:
# Bad code
$my_string =~ /\A(\w+) (\w+)/;
my $first_name = $1;
my $last_name = $2;

print "Hello $first_name $last_name!";
However, if the regular expression match is not successful, these variables may retain their values from the previous successful match, whatever those were.
A better code would be:
if ($my_string =~ /\A(\w+) (\w+)/)
{
    my $first_name = $1;
    my $last_name = $2;
    print "Hello $first_name $last_name!";
}
else
{
    # Handle the error.
}
Or better yet grab the matches from the regex return values directly:
if (my ($first_name, $last_name) = $my_string =~ /\A(\w+) (\w+)/)
{
    print "Hello $first_name $last_name!";
}
else
{
    # Handle the error.
}
Using select($file_handle)
One should not use select's select($file_handle) syntax to set the “currently selected filehandle”, because this will affect all subsequent uses of functions such as print, and is a sure-fire way to confuse the maintenance programmer. Instead, use IO::Handle and its methods. For example:
# Bad code
my $old_fh = select(STDERR);
$| = 1; # Set ->autoflush()
select($old_fh);
should be instead written as:
use IO::Handle;

STDERR->autoflush(1);
The four-argument form of select(), which checks which file handles or sockets are ready, is a more legitimate use. Note, however, that for a simple delay one should use Time::HiRes, and that there are more efficient platform-specific mechanisms for waiting on handles, such as IO::Epoll for Linux or kqueue for FreeBSD, which are abstracted by the event-driven programming frameworks on CPAN.
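For the simple-delay case, a minimal sketch using the core Time::HiRes module:

```perl
use strict;
use warnings;
use Time::HiRes qw(sleep time);

# Time::HiRes overrides sleep() to accept fractional seconds,
# which is clearer than the old 4-argument select() idiom.
my $start = time();
sleep(0.1);    # sleep for roughly a tenth of a second
my $elapsed = time() - $start;

printf "slept for about %.2f seconds\n", $elapsed;
```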
Non-Recommended Regular Expression-related Variables
The regular expression variables $& (or $MATCH under use English;), $` ($PREMATCH), and $' ($POSTMATCH) should normally not be used, because merely mentioning them incurs a performance penalty on every regular expression match in the program. As an alternative, one can use:
$` ⇒ substr($var, 0, $-[0])
$& ⇒ substr($var, $-[0], $+[0] - $-[0])
$' ⇒ substr($var, $+[0])
Alternatively, use ${^MATCH}, ${^PREMATCH}, and ${^POSTMATCH}, available from perl-5.10.0 onwards for matches performed with the /p flag (see perlvar for more information about those).
Furthermore, a plain use English; statement should be replaced with use English qw( -no_match_vars );, which avoids bringing the problematic match variables into use.
Not Packaging as CPAN-like Distributions
It is a very good idea for Perl code that you develop for in-house use and place inside .pm module files to be packaged as a set of CPAN-like distributions, using the standard structure of lib/, Makefile.PL or Build.PL, t/, etc. This will facilitate installing, managing, and testing it. If your code is just lying around the hard disk, it is much harder to deploy.
For more information see our page about CPAN.
Not Using a Bug Tracker/Issue Tracker
It is important to use a bug tracking system to maintain a list of bugs and issues that need to be fixed in your code, and of features that you'd like to work on. Sometimes, a simple file kept inside the version control system would be enough, but at other times, you should opt for a web-based bug tracker.
For more information, see:
“Bug Trackers” list on Shlomi Fish’s “Software Construction and Management Tools” page.
Unrelated packages inside modules
If your module is lib/MyModule.pm, then it should only contain namespaces/packages under MyModule::. If it contains package OtherModule;, then that package will be harder to find, and confusing. Preferably, every package should live in its own module file (except for privately-used ones).
Non-explicitly-imported symbols
When importing symbols from packages, it is a good idea to specify the imported symbols explicitly, so one won't have to wonder where identifiers come from. So:
# Bad code
use MyLib;

print my_function(3,5), "\n";
Should be replaced with:
use MyLib qw( my_function );

print my_function(3,5), "\n";
Or alternatively "()" for no imports:
use MyLib::Object ();

print MyLib::Object->new({numbers => [3, 5, ]})->result(), "\n";
Excessive calls to subroutines in other packages
It is not a good idea to litter your code with many calls to subroutines in other packages (using the MyModule::my_sub() notation), because it breaks encapsulation. Instead, make the module a class, instantiate it as an object, and call its methods.
Using the Scalar Multidimensional Hashes Emulation
Please do not use Perl's scalar multidimensional hash emulation:
# Bad code
$myhash{$key1,$key2} = $my_value;

$value = $myhash{$key1,$key2};
This is a relic of old versions of Perl, which joins the keys into a single string using the $; ("subscript separator") variable. The proper way to do it is using nested hashes and references, or an explicit serialisation of the keys. The emulation can also be easily confused with a hash slice - @myhash{$key1,$key2}
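A minimal sketch of the nested-hash alternative (the variable names are illustrative):

```perl
#!/usr/bin/env perl
use strict;
use warnings;

my %prices;

# Store the value under two genuinely nested keys instead of the
# $prices{$key1,$key2} emulation.
$prices{'fruit'}{'apple'} = 3;
$prices{'fruit'}{'pear'}  = 5;

# Retrieve it the same way.
my $value = $prices{'fruit'}{'apple'};
print "$value\n";    # prints 3

# A hash slice on the inner hash is now unambiguous:
my @both = @{ $prices{'fruit'} }{ 'apple', 'pear' };
print "@both\n";     # prints "3 5"
```

The inner hash reference can also be passed around and iterated on its own, which the flat key-joining emulation does not allow.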
Using the "-w" flag
It is important not to pass the "-w" flag in the shebang line of the Perl program, because it turns warnings on globally (affecting third-party modules too) and interferes with the more modern use warnings;
statement. Unfortunately, many legacy tutorials and codebases still demonstrate it.
So please rewrite this:
# Bad code
#!/usr/bin/env perl -w

use strict;
Into this:
#!/usr/bin/env perl

use strict;
use warnings;
Also see our section about why omitting use strict;
and use warnings;
is not recommended, for more information.
“use Module qw($VERSION @IMPORTS)”
Perl will accept the following syntax for loading a particular version of a module along with some imports:
# Bad code
use Getopt::Long qw(2.36 GetOptionsFromArray);
However, this syntax won't necessarily load the minimal version of the module, and tools such as Dist::Zilla won’t handle it properly. The proper way to do it is with the version number given right after the module name, and before the imports list, delimited by spaces on both sides:
#!/usr/bin/env perl

use Getopt::Long 2.36 qw(GetOptionsFromArray);
This will work better.
Mutating Variables by referring to them again (e.g: “$x = $x + 1”)
Sometimes one can see code like that:
# Bad code
$x = $x + 1;

$my_hash{$my_key}{'field'} = $my_hash{$my_key}{'field'} . " more";
The problem with such code is that the lvalue (the storage location on the left) is repeated on the right-hand side, which makes the statement longer and more prone to errors.
To overcome this Perl provides operators such as +=
, .=
or ++
and --
which avoid the repetition:
$x++;

$my_hash{$my_key}{'field'} .= " more";
It is recommended to use them instead.
Using the "**" operator for integer power/exponentiation
The Perl 5 "**" exponentiation operator converts its operands to floating point and returns a floating-point result, even when “use integer;” is in effect. As a result, it should not be used for integer exponentiation (raising to an integer power), where it can silently lose precision.
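A short demonstration of the precision loss: a double-precision float has only 53 bits of mantissa, so above 2**53 consecutive integers are no longer distinguishable and "**" silently conflates them:

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# 2**64 is computed in floating point, so adding 1 to it is lost
# in rounding and the two values compare equal:
my $big = 2 ** 64;
if ($big == $big + 1) {
    print "2**64 and 2**64 + 1 compare equal - precision was lost\n";
}

# By contrast, a big-integer computation keeps every digit
# (Math::BigInt is a core module):
use Math::BigInt;
my $exact = Math::BigInt->new(2)->bpow(64);
print "$exact\n";    # prints 18446744073709551616
```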
Instead, one can use the exponentiation or “exp_mod” routines of various Perl 5 big integer modules, or implement them using routines such as:
sub exp_mod
{
    my ($MOD, $base, $e) = @_;

    if ($e == 0)
    {
        return 1;
    }

    # Recurse on half the exponent and square the result.
    my $rec_p = exp_mod($MOD, $base, ($e >> 1));
    my $ret = $rec_p * $rec_p;

    # If the exponent is odd, multiply in the base once more.
    if ($e & 0x1)
    {
        ($ret %= $MOD) *= $base;
    }

    return ($ret % $MOD);
}
TAP test suites that fail when run in parallel
As mentioned before, it is a good idea for your production code to have automated tests, and Perl provides a facility for that using Test-Harness and various TAP-emitting systems that can be written in almost any programming language. In addition, Test-Harness supports a “-j9” flag (also available in the “HARNESS_OPTIONS” environment variable as “j9”), where “9” is the number of parallel processes, which allows running the test scripts in parallel in order to make the test suite run faster. As a result, one should make sure one's TAP-based test suite runs fine in parallel, and that the individual scripts do not step on each other’s toes.
One can make good use of File-Temp for that.
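For instance, letting File::Temp pick a fresh scratch directory per test process (a sketch; the file name is arbitrary) prevents two parallel scripts from clobbering a shared hard-coded path:

```perl
#!/usr/bin/env perl
use strict;
use warnings;

use File::Spec;
use File::Temp qw( tempdir );

# Each test process gets its own scratch directory, removed at exit,
# so parallel runs cannot collide on a shared filename.
my $dir      = tempdir( CLEANUP => 1 );
my $out_path = File::Spec->catfile( $dir, 'output.txt' );

open my $out_fh, '>', $out_path
    or die "Cannot open '$out_path' for writing - $!";
print {$out_fh} "some test output\n";
close($out_fh);

open my $in_fh, '<', $out_path
    or die "Cannot open '$out_path' for reading - $!";
my $line = <$in_fh>;
close($in_fh);

print $line;    # prints "some test output"
```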
Using character classes (e.g "\d") with Unicode text
As mentioned in perlrecharclass, the regular expression character class "\d" (and similar character classes) matches more than the ASCII digits of 0 through 9. For example:
# Bad code
#!/usr/bin/env perl

use strict;
use warnings;
use utf8;

my $string = "۲۳۴";

if ( $string =~ /\A \d + \z/x )
{
    print "The string '$string' consists entirely of digits!\n";
}
else
{
    print "The string '$string' is not all digits!\n";
}
The match in this code succeeds, as these are Eastern Arabic numerals! To remedy this, use [0-9]
inside regular expressions, and similarly explicit character sets for the other character classes (or the /a regexp flag, available in perl 5.14 and later).
#!/usr/bin/env perl

use strict;
use warnings;
use utf8;

my $string = "۲۳۴";

if ( $string =~ /\A [0-9] + \z/x )
{
    print "The string '$string' consists entirely of digits!\n";
}
else
{
    print "The string '$string' is not all digits!\n";
}
Using empty regular expressions patterns
If you write code like if ($string =~ /$pattern/) { … }
and $pattern
happens to be the empty string, then the match will reuse the last successfully matched pattern, rather than always returning true as one would expect from a genuinely empty pattern.
One can fix the problematic behaviour by writing (length($pattern) ? ($string =~ /$pattern/) : 1)
or ($string =~ /(?:)$pattern/)
.
For more information, see the discussion at perl 5 issue #17577.
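A small demonstration of both the pitfall and the (?:) workaround:

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Establish a "last successful pattern" first.
my $matched = ("hello" =~ /hello/);

my $pattern = "";

# Pitfall: the empty pattern silently reuses /hello/, so this match
# fails even though an empty pattern "should" match any string.
if ("world" =~ /$pattern/) {
    print "matched\n";
} else {
    print "did not match - /hello/ was reused\n";
}

# Workaround: prefixing (?:) makes the compiled pattern non-empty,
# so an empty $pattern now really matches everything.
if ("world" =~ /(?:)$pattern/) {
    print "matched with the (?:) workaround\n";
}
```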
Filenames with case collisions
One should not use filenames for two or more files (or directories) which are the same except for letter case / capitalization such as readme.txt
and ReadMe.txt
, because some file systems are case insensitive.
One can use File-Find-CaseCollide or similar to test for them.
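A sketch of such a check using only core Perl (the find_case_collisions helper below is hypothetical, not File-Find-CaseCollide's actual API):

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Group a list of paths by their lowercased form; any group with more
# than one member would collide on a case-insensitive file system.
sub find_case_collisions {
    my @paths = @_;
    my %by_folded;
    push @{ $by_folded{ lc $_ } }, $_ for @paths;
    return grep { @$_ > 1 } values %by_folded;
}

my @collisions = find_case_collisions(
    qw( readme.txt ReadMe.txt lib/Foo.pm t/basic.t )
);

for my $group (@collisions) {
    print "Collision: @$group\n";    # prints "Collision: readme.txt ReadMe.txt"
}
```

Running a check like this in the test suite catches the problem before the code is ever checked out on a case-insensitive system.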
"plan skip_all" with "use Test::More tests => [count];"
If you write Test::More code like:
# Bad code
#!/usr/bin/env perl

use strict;
use warnings;

use Test::More tests => 1;

eval "use Uninstalled::Module;";

if ($@)
{
    plan 'skip_all' => "Failed to load Uninstalled::Module . Skipping.";
}

# TEST
pass("passing");
Then running it will give:
$ prove t/uninstalled.t
You tried to plan twice at t/uninstalled.t line 9.
# Looks like your test exited with 2 before it could output anything.
t/uninstalled.t .. Dubious, test returned 2 (wstat 512, 0x200)
Failed 1/1 subtests

Test Summary Report
-------------------
t/uninstalled.t (Wstat: 512 Tests: 0 Failed: 0)
  Non-zero exit status: 2
  Parse errors: Bad plan.  You planned 1 tests but ran 0.
Files=1, Tests=0,  0 wallclock secs ( 0.03 usr  0.00 sys +  0.05 cusr  0.00 csys =  0.08 CPU)
Result: FAIL
Instead, one should load Test::More with a plain use Test::More;
and call plan tests =>
in a separate branch of the conditional from the skip_all
one:
#!/usr/bin/env perl

use strict;
use warnings;

use Test::More;

eval "use Test::Trap;";

if ($@)
{
    plan 'skip_all' => "Failed to load Test::Trap . Skipping.";
}
else
{
    plan tests => 1;
}
split()'s third "limit" argument
When the third argument of split (see http://perldoc.perl.org/functions/split.html) is omitted, it trims empty strings from the end of the result:
#! /usr/bin/env perl
#
# Short description for split-demo.pl
#
use strict;
use warnings;
use 5.014;
use autodie;

use Data::Dumper qw/ Dumper /;

my $string = '/why/hello/there///';

say Dumper( [ split m#/#, $string ] );
say Dumper( [ split m#/#, $string, -1 ] );
This prints:
shlomif[perl-begin]:$trunk$ perl bin/split-demo.pl
$VAR1 = [
          '',
          'why',
          'hello',
          'there'
        ];

$VAR1 = [
          '',
          'why',
          'hello',
          'there',
          '',
          '',
          ''
        ];
So using "-1" as the third argument may or may not be what you want.
Standalone XML tags in HTML
Some HTML parsers will treat certain standalone (self-closing) XHTML tags, such as <iframe…/>
, as only an opening tag, rather than as an opening tag followed by a closing tag ( <iframe…></iframe>
). It is safer to write the explicit closing tag in HTML documents.
Sources of This Advice
This is a short list of the sources from which this advice was taken, and which also contain material for further reading:
The Book "Perl Best Practices" by Damian Conway - contains a lot of good advice and food for thought, but sometimes should be deviated from. Also see the "PBP Module Recommendation Commentary" on the Perl 5 Wiki.
"Ancient Perl" on the Perl 5 Wiki.
The book Refactoring by Martin Fowler - not particularly about Perl, but still useful.
The book The Pragmatic Programmer: From Journeyman to Master - also not particularly about Perl, and I found it somewhat disappointing, but it is an informative book.
The list “How to tell if a FLOSS project is doomed to FAIL”.
Advice given by people on Freenode's #perl channel, on the Perl Beginners mailing list, and on other Perl forums.