All the Unix shells (that I know of) implement shell pipelines via something other than a pty (typically, they use Unix pipes!-); therefore, the C/C++ runtime library in cpp_program will KNOW its output is NOT a terminal, and therefore it WILL buffer the output (in chunks of a few KB at a time). Unless you write your own shell (or semiquasimaybeshelloid) that implements pipelines via ptys, I believe there is no way to do what you require using pipeline notation.
The "shelloid" thing in question might be written in Python (or in C, or Tcl, or...), using the pty
module of the standard library or higher-level abstraction based on it such as pexpect, and the fact that the two programs to be connected via a "pty-based pipeline" are written in C++ and Python is pretty irrelevant. The key idea is to trick the program to the left of the pipe into believing its stdout is a terminal (that's why a pty must be at the root of the trick) to fool its runtime library into NOT buffering output. Once you have written such a shelloid, you'd call it with some syntax such as:
$ shelloid 'cpp_program | python_program.py'
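For concreteness, here is a minimal sketch of what such a shelloid might look like in Python, using only the standard-library pty and subprocess modules. It handles just a two-stage pipeline, does essentially no error handling, and the name shelloid and its command-line syntax are purely illustrative:

    #!/usr/bin/env python3
    # Minimal "shelloid" sketch (name and syntax illustrative only):
    # run the left-hand command with its stdout attached to the slave
    # side of a pty, so its C runtime library believes it is writing to
    # a terminal and stays line-buffered; copy whatever shows up on the
    # master side into the right-hand command's stdin.
    import os, pty, shlex, subprocess, sys

    left_cmd, right_cmd = (shlex.split(p) for p in sys.argv[1].split('|', 1))

    master_fd, slave_fd = pty.openpty()
    producer = subprocess.Popen(left_cmd, stdout=slave_fd)
    consumer = subprocess.Popen(right_cmd, stdin=subprocess.PIPE)
    os.close(slave_fd)          # only the producer keeps the slave end open

    while True:
        try:
            data = os.read(master_fd, 1024)
        except OSError:         # EIO: producer exited and closed the pty
            break
        if not data:
            break
        consumer.stdin.write(data)
        consumer.stdin.flush()

    consumer.stdin.close()
    producer.wait()
    consumer.wait()

Because the producer's stdout is now the slave side of a pty, its stdio layer switches from block buffering to line buffering, which is exactly the behavior we're after.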
Of course it would be easier to provide a "point solution" by writing python_program in the knowledge that it must spawn cpp_program as a sub-process AND trick it into believing its stdout is a terminal (i.e., python_program would then directly use pexpect, for example). But if you have a million such situations where you want to defeat the normal buffering performed by the system-provided C runtime library, or many cases in which you want to reuse existing filters, etc., writing shelloid might actually be preferable.
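As a rough idea of what that "point solution" could look like (again only a sketch: the path ./cpp_program and the handle_line helper are placeholders, and it assumes pexpect is installed):

    # python_program.py, rewritten to spawn cpp_program itself under a pty
    # via pexpect, instead of reading it from a pipe on stdin.
    import pexpect

    def handle_line(line):
        # placeholder for whatever python_program really does with each line
        print('got:', line.rstrip())

    child = pexpect.spawn('./cpp_program', encoding='utf-8', timeout=None)
    for line in child:          # lines arrive as soon as cpp_program prints them
        handle_line(line)
    child.close()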