There is no need to reach for external tools for such a common task. Doing so means starting a shell and yet another program, and (doubly) escaping things just right; it is error prone, far less efficient, and inferior in terms of error checking. Why not use Perl in a Perl program?
Read the file and write its lines to a new file, skipping the ones you don't want; see this post, for example, for details.
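For reference, here is a minimal sketch of that read-filter-write loop in core Perl; the filenames are placeholders, and the pattern is the one used throughout this answer:

use warnings;
use strict;

my $file     = '...';
my $new_file = '...';

open my $fh_in,  '<', $file     or die "Can't open $file: $!";
open my $fh_out, '>', $new_file or die "Can't open $new_file: $!";

while (my $line = <$fh_in>) {
    # Keep every line that does not end in "utc" (plus optional trailing whitespace)
    print $fh_out $line unless $line =~ /utc\s*$/;
}

close $fh_out or die "Can't close $new_file: $!";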
Here is a quick way using Path::Tiny:
use warnings;
use strict;
use Path::Tiny;

my $file     = '...';
my $new_file = '...';

# Keep the lines that do not end in "utc" (with possible trailing whitespace)
my @new_lines = grep { not /utc\s*$/ } path($file)->lines;

path($new_file)->spew(@new_lines);
The module's path($file) constructs the object, and its lines method reads the file and returns the list of its lines; grep filters them, and those that do not end in utc (with possible trailing whitespace) are assigned to @new_lines. The spew method then writes those lines to $new_file.
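Note that lines keeps the line endings by default, which is why @new_lines can be handed to spew as is. If you prefer chomped lines, the newlines have to be restored on output; a sketch of that variant:

my @new_lines = grep { not /utc\s*$/ } path($file)->lines({ chomp => 1 });
path($new_file)->spew(map { "$_\n" } @new_lines);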
For a couple of (other) ways to "edit" a file using this module, see this post.
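One such way, if changing the file itself is acceptable, is the module's edit_lines method; if I recall its interface correctly, the callback gets each line in $_ and a line is deleted by setting $_ to the empty string:

use Path::Tiny;

# Rewrites $file in place; clearing $_ drops that line from the output
path($file)->edit_lines( sub { $_ = '' if /utc\s*$/ } );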
In a one-liner:
perl -ne'print if not /utc\s*$/' file > new_file
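If changing the file in place is fine, the -i switch does the read-and-replace bookkeeping for you; with an extension given, as in -i.bak here, the original is kept as a backup:

perl -i.bak -ne'print if not /utc\s*$/' file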
A direct answer may best illustrate (some of) the disadvantages of using external commands.
We need to pass to grep, via the shell, particular character sequences that would be interpreted by Perl, by the shell, or by both; so they need to be escaped correctly
system("grep -v 'utc\\s*\$' $old_file > $new_file");
This works on my system, but note that \s in the pattern is a GNU grep extension, so even once escaped correctly it need not be portable.
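If an external grep must be run, the shell can at least be bypassed: the list form of open starts the command directly, so only Perl's own string rules apply and there is no shell-level escaping to get right. A minimal sketch, using the same $file and $new_file as above (no shell also means no > redirection, so we write the output ourselves):

# The list form of a piped open bypasses the shell entirely
open my $grep, '-|', 'grep', '-v', 'utc\s*$', $file
    or die "Can't start grep: $!";
open my $out, '>', $new_file or die "Can't open $new_file: $!";
print $out $_ while <$grep>;
close $out or die "Can't close $new_file: $!";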