You did not explain why the existing Perl script can't be run directly as the CGI script. This seems the easiest path to me, as it does not involve creating another communication channel:
client ⇒ apache ⇒ your CGIzed existing script ⇒ embedded system
as opposed to:
client ⇒ apache ⇒ new CGI script ⇒ existing script ⇒ embedded system
I assume the reason is that you expect the CGI script to run multiple times simultaneously, and the embedded system is not able to handle multiple connections.
But even in this case, having a daemon whose only purpose is serialization seems like overkill. You can use a lock in the CGI script to protect the critical communication code, as in:
use Fcntl qw(:flock);    # exports the LOCK_EX and LOCK_UN constants

open(my $lock, ">", $lockfilename) or die "Cannot open lock file: $!";
flock($lock, LOCK_EX);   # block until we have exclusive access
... critical code ...
flock($lock, LOCK_UN);   # release the lock for the next request
Note that the "critical code" part could either embed your existing script, or execute it.
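For instance, here is a minimal sketch of the "execute it" variant, assuming (hypothetically) that your existing script lives at /usr/local/bin/talk_to_device.pl, takes the command as a command-line argument, and prints its result on STDOUT; $command stands for whatever the CGI received:

use Fcntl qw(:flock);

open(my $lock, ">", $lockfilename) or die "Cannot open lock file: $!";
flock($lock, LOCK_EX);

# hypothetical path and calling convention; the list form of open avoids the shell
open(my $cmd, "-|", "/usr/local/bin/talk_to_device.pl", $command)
    or die "Cannot run script: $!";
my $output = do { local $/; <$cmd> };   # slurp everything the script printed
close($cmd);

flock($lock, LOCK_UN);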
If, despite all this, you still want to separate the CGI from the command daemon, here are templates for the client and server parts of socket-based communication. First, the client, which is part of the CGI:
use IO::Socket::INET;

# connect to the command daemon (the port must match the one the daemon listens on)
my $sock = IO::Socket::INET->new('localhost:9000')
    or die "Cannot connect to the daemon: $!";

print $sock "Command\n";          # send the command
my $result = <$sock>;             # read the one-line answer
... do something with the result ...
$sock->close;
And this is the main daemon loop:
package MyServer;
use base qw(Net::Server);

sub process_request {
    my $self = shift;
    while (<STDIN>) {
        chomp;
        print "This is my answer to the command '$_'.\r\n";
    }
}

MyServer->run(port => 9000);   # same port the CGI client connects to
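The daemon is also the natural place to call your existing script, since the plain Net::Server base class handles connections one at a time and so gives you the serialization for free. A sketch of process_request doing that, again assuming a hypothetical script path, a command-line interface, and one-line answers:

package MyServer;
use base qw(Net::Server);

sub process_request {
    my $self = shift;
    while (my $command = <STDIN>) {
        chomp $command;
        # hypothetical path and calling convention for your existing script;
        # its first output line is relayed back to the CGI client as the answer
        if (open(my $cmd, "-|", "/usr/local/bin/talk_to_device.pl", $command)) {
            my $result = <$cmd>;
            close($cmd);
            $result = "no output" unless defined $result;
            chomp $result;
            print "$result\r\n";
        }
        else {
            print "ERROR: could not run script: $!\r\n";
        }
    }
}

MyServer->run(port => 9000);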