GCC's instrumentation and the target environment

GCC's instrumentation and the target environment

David Taylor
I wish to use GCC based instrumentation on an embedded target.  And I
am finding that GCC's libgcov.a is not well suited to my needs.

Ideally, all the application entry points and everything that knows
about the internals of the implementation would be in separate files
from everything that does i/o or otherwise uses 'system services'.

Right now GCC has libgcov-driver.c which includes both gcov-io.c and
libgcov-driver-system.c.

What I'd like is a stable API between the routines that 'collect' the
data and the routines that do the i/o.  With the i/o routines being
non-static and in a separate file from the others that is not
#include'd.

I want them to be replaceable by the application.  Depending upon
circumstances I can imagine the routines doing network i/o, disk i/o,
or using a serial port.

I want one version of libgcov.a for all three, with three different
sets of i/o routines that I can build into the application.  If the
internals of the instrumentation change, I don't want to have to change
the i/o routines or anything in the application.

If you think of it in disk driver terms, some of the routines in
libgcov.a provide a DDI -- an interface of routines that the
application can call.  For applications that exit, one of the
routines is called at program exit.  For long running applications,
there are routines in the DDI to dump and flush the accumulated
information.

And the i/o routines can be thought of as providing a DKI -- what the
library libgcov.a expects of the environment -- for example, fopen and
fwrite.
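
To make that split concrete, here is a rough sketch of the kind of
interface I have in mind.  The names below are purely illustrative --
they are not existing libgcov symbols (only __gcov_dump and __gcov_reset
exist today):

/* 'DKI' side: what libgcov.a would expect the environment (or the
   application) to supply.  Names are hypothetical.  */
struct gcov_io_ops
{
  int (*open) (const char *filename);            /* start output for one .gcda file */
  int (*write) (const void *buf, unsigned len);   /* emit already-encoded gcov data */
  int (*close) (void);                            /* finish the current file */
};

/* 'DDI' side: what the application calls -- install an implementation
   once, then dump/flush on demand in long-running programs.  */
extern void __gcov_set_io_ops (const struct gcov_io_ops *ops);  /* hypothetical */
extern void __gcov_dump (void);    /* exists today */
extern void __gcov_reset (void);   /* exists today */

With something like that, the default libgcov.a could ship a file-based
implementation, and an embedded application could link in a socket- or
serial-backed one without touching the collection code.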

There's also the inhibit_libc define.  If you don't have headers you
might have a hard time including <stdio.h> or some of the other header
files, but if the environment has a way of doing i/o or saving the
results, there is no real reason why it should not be possible to
provide instrumentation.

Comments?
Re: GCC's instrumentation and the target environment

Martin Liška
On 11/1/19 7:13 PM, David Taylor wrote:
> I wish to use GCC based instrumentation on an embedded target.  And I
> am finding that GCC's libgcov.a is not well suited to my needs.
>
> Ideally, all the application entry points and everything that knows
> about the internals of the implementation would be in separate files
> from everything that does i/o or otherwise uses 'system services'.
>
> Right now GCC has libgcov-driver.c which includes both gcov-io.c and
> libgcov-driver-system.c.

Hello.

>
> What I'd like is a stable API between the routines that 'collect' the
> data and the routines that do the i/o.  With the i/o routines being
> non-static and in a separate file from the others that is not
> #include'd.
>
> I want them to be replaceable by the application.  Depending upon
> circumstances I can imagine the routines doing network i/o, disk i/o,
> or using a serial port.

What's the difference between network i/o and disk i/o? What about using an
NFS file system into which you can save the data (via -fprofile-dir=/mnt/mynfs/...)?

I can imagine dumping into stderr, for example. That should be quite easy to do.

Martin

>
> I want one version of libgcov.a for all three, with three different
> sets of i/o routines that I can build into the application.  If the
> internals of the instrumentation change, I don't want to have to change
> the i/o routines or anything in the application.
>
> If you think of it in disk driver terms, some of the routines in
> libgcov.a provide a DDI -- an interface of routines that the
> application can call.  For applications that exit, one of the
> routines is called at program exit.  For long running applications,
> there are routines in the DDI to dump and flush the accumulated
> information.
>
> And the i/o routines can be thought of as providing a DKI -- what the
> library libgcov.a expects of the environment -- for example, fopen and
> fwrite.
>
> There's also the inhibit_libc define.  If you don't have headers you
> might have a hard time including <stdio.h> or some of the other header
> files, but if the environment has a way of doing i/o or saving the
> results, there is no real reason why it should not be possible to
> provide instrumentation.
>
> Comments?
>

RE: GCC's instrumentation and the target environment

David.Taylor
> From: Martin Liška <[hidden email]>
> Sent: Monday, November 4, 2019 4:20 AM
> To: taylor, david; [hidden email]
> Subject: Re: GCC's instrumentation and the target environment

> On 11/1/19 7:13 PM, David Taylor wrote:

> Hello.

Hello.

> > What I'd like is a stable API between the routines that 'collect' the
> > data and the routines that do the i/o.  With the i/o routines being
> > non-static and in a separate file from the others that is not
> > #include'd.
> >
> > I want them to be replaceable by the application.  Depending upon
> > circumstances I can imagine the routines doing network i/o, disk i/o,
> > or using a serial port.
>
> What's the difference between network i/o and disk i/o? What about using an NFS
> file system into which you can save the data (via -fprofile-dir=/mnt/mynfs/...)?

I/O encompasses more than just reading and writing a file in a file system.
Depending on the embedded target you might not have the ability to NFS mount.
You might not even have a file system accessible to instrumentation.

By network I/O I'm thinking sockets.  There's some code, possibly run at
'boot' time or possibly run during the first __gcov_open, that establishes a
network connection with a process running on another system.  There's some
protocol, agreed to by the application and the remote process, for
communicating the data collected and which file it belongs to.

By serial I/O, I'm thinking of a serial port.

Hopefully that is clearer.

> I can imagine dumping into stderr, for example. That should be quite easy to do.

I don't think that the current implementation would make that easy.  For us there
are potentially over a thousand files being instrumented.  You need to communicate
which file the data belongs to.  Whether it is via stderr, a serial port, or a network
connection, the file name needs to be in the stream and there needs to be a way
of determining when one file ends and the next one begins.

For us, stderr and stdout, when defined, are used for communicating status
and extraordinary events.  They are not well suited for transferring
instrumentation data.
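
To make the framing concrete, something along these lines is what I am
picturing for the socket case.  This is a rough sketch only, assuming a
replaceable open/write/close interface like the one I sketched in my
first message; the port, address, and record format are illustrative,
and partial writes are ignored for brevity:

#include <string.h>
#include <stdint.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

static int sock = -1;

/* Connect on first use, then announce which .gcda file the following
   records belong to.  */
static int net_open (const char *filename)
{
  if (sock < 0)
    {
      struct sockaddr_in addr;

      sock = socket (AF_INET, SOCK_STREAM, 0);
      if (sock < 0)
        return -1;
      memset (&addr, 0, sizeof addr);
      addr.sin_family = AF_INET;
      addr.sin_port = htons (9970);                   /* illustrative port */
      addr.sin_addr.s_addr = inet_addr ("192.0.2.1"); /* collector host (illustrative) */
      if (connect (sock, (struct sockaddr *) &addr, sizeof addr) < 0)
        {
          close (sock);
          sock = -1;
          return -1;
        }
    }

  uint32_t nlen = htonl ((uint32_t) strlen (filename));
  if (write (sock, &nlen, sizeof nlen) < 0
      || write (sock, filename, strlen (filename)) < 0)
    return -1;
  return 0;
}

/* Emit one length-prefixed block of already-encoded gcov data.  */
static int net_write (const void *buf, unsigned len)
{
  uint32_t nlen = htonl (len);
  if (write (sock, &nlen, sizeof nlen) < 0
      || write (sock, buf, len) < 0)
    return -1;
  return 0;
}

/* A zero-length block marks the end of one file's data.  */
static int net_close (void)
{
  uint32_t zero = 0;
  return write (sock, &zero, sizeof zero) < 0 ? -1 : 0;
}

The process on the other end would read the length-prefixed records,
split them back into per-file .gcda data, and write them to disk where
gcov can find them.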

> Martin

David

Re: GCC's instrumentation and the target environment

Joel Sherrill <joel.sherrill@OARcorp.com>
On Mon, Nov 4, 2019 at 7:06 AM <[hidden email]> wrote:

> > From: Martin Liška <[hidden email]>
> > Sent: Monday, November 4, 2019 4:20 AM
> > To: taylor, david; [hidden email]
> > Subject: Re: GCC's instrumentation and the target environment
>
> > On 11/1/19 7:13 PM, David Taylor wrote:
>
> > Hello.
>
> Hello.
>
> > > What I'd like is a stable API between the routines that 'collect' the
> > > data and the routines that do the i/o.  With the i/o routines being
> > > non-static and in a separate file from the others that is not
> > > #include'd.
> > >
> > > I want them to be replaceable by the application.  Depending upon
> > > circumstances I can imagine the routines doing network i/o, disk i/o,
> > > or using a serial port.
> >
> > What's the difference between network i/o and disk i/o? What about using
> > an NFS file system into which you can save the data (via
> > -fprofile-dir=/mnt/mynfs/...)?
>
> I/O encompasses more than just reading and writing a file in a file system.
> Depending on the embedded target you might not have the ability to NFS
> mount.  You might not even have a file system accessible to instrumentation.
>
> By network I/O I'm thinking sockets.  There's some code, possibly run at
> 'boot' time or possibly run during the first __gcov_open, that establishes
> a network connection with a process running on another system.  There's
> some protocol, agreed to by the application and the remote process, for
> communicating the data collected and which file it belongs to.
>
> By serial I/O, I'm thinking of a serial port.
>
> Hopefully that is clearer.
>
> > I can imagine dumping into stderr, for example. That should be quite
> > easy to do.
>
> I don't think that the current implementation would make that easy.  For us
> there are potentially over a thousand files being instrumented.  You need
> to communicate which file the data belongs to.  Whether it is via stderr, a
> serial port, or a network connection, the file name needs to be in the
> stream and there needs to be a way of determining when one file ends and
> the next one begins.
>
> For us, stderr and stdout, when defined, are used for communicating status
> and extraordinary events.  They are not well suited for transferring
> instrumentation data.
>

And I generally agree with that statement, but I am also on a project
evaluating the use of a commercial tool which does coverage and includes
MCDC analysis.  It has a very flexible plugin for this specific purpose.
You can dump, in any format you can decode, to any output destination.
They have many standard implementations and plenty of examples you can
tailor.

It wouldn't be terribly difficult to multiplex the console and filter it.

I would suggest considering dumping into a buffer and having an external
agent (e.g. debugger, JTAG-based program, etc.) retrieve it.
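
Roughly, I am picturing something like the sketch below (the names and
buffer size are illustrative, and it assumes the same kind of replaceable
write hook discussed above): the hook only appends into RAM, and the
debugger or JTAG agent reads the buffer and its length out of target
memory after __gcov_dump() returns.

#include <string.h>

#define GCOV_DUMP_BUF_SIZE (256 * 1024)   /* size is illustrative */

static unsigned char gcov_dump_buf[GCOV_DUMP_BUF_SIZE];
static volatile unsigned gcov_dump_len;   /* agent reads this after the dump */

/* Hypothetical replaceable write hook: append encoded gcov data to RAM
   instead of doing any i/o.  The external agent halts the target after
   __gcov_dump() returns and reads gcov_dump_buf[0..gcov_dump_len).  */
static int buf_write (const void *data, unsigned len)
{
  if (gcov_dump_len + len > GCOV_DUMP_BUF_SIZE)
    return -1;                            /* buffer full: data would be dropped */
  memcpy (gcov_dump_buf + gcov_dump_len, data, len);
  gcov_dump_len += len;
  return 0;
}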

RTEMS programs generally don't exit and often have no networking.  You have
to have flexibility.  No one is forcing a single output medium -- just
flexibility.

<hint> I'd love to see decision and MCDC coverage support </hint>.

--joel


>
> > Martin
>
> David
>
>
Re: GCC's instrumentation and the target environment

Martin Liška
In reply to this post by David.Taylor
On 11/4/19 2:06 PM, [hidden email] wrote:

>> From: Martin Liška <[hidden email]>
>> Sent: Monday, November 4, 2019 4:20 AM
>> To: taylor, david; [hidden email]
>> Subject: Re: GCC's instrumentation and the target environment
>
>> On 11/1/19 7:13 PM, David Taylor wrote:
>
>> Hello.
>
> Hello.
>
>>> What I'd like is a stable API between the routines that 'collect' the
>>> data and the routines that do the i/o.  With the i/o routines being
>>> non-static and in a separate file from the others that is not
>>> #include'd.
>>>
>>> I want them to be replaceable by the application.  Depending upon
>>> circumstances I can imagine the routines doing network i/o, disk i/o,
>>> or using a serial port.
>>
>> What's the difference between network i/o and disk i/o? What about using an NFS file
>> system into which you can save the data (via -fprofile-dir=/mnt/mynfs/...)?
>
> I/O encompasses more than just reading and writing a file in a file system.
> Depending on the embedded target you might not have the ability to NFS mount.
> You might not even have a file system accessible to instrumentation.
>
> By network I/O I'm thinking sockets.  There's some code, possibly run at
> 'boot' time or possibly run during the first __gcov_open, that establishes
> a network connection with a process running on another system.  There's
> some protocol, agreed to by the application and the remote process, for
> communicating the data collected and which file it belongs to.
>
> By serial I/O, I'm thinking of a serial port.

Hello.

I see your needs.  I would recommend coming up with patches that enable such
a communication channel.  I can review the patches or help you with any obstacles.

Martin

>
> Hopefully that is clearer.
>
>> I can imagine dumping into stderr, for example. That should be quite easy to do.
>
> I don't think that the current implementation would make that easy.  For us there
> are potentially over a thousand files being instrumented.  You need to communicate
> which file the data belongs to.  Whether it is via stderr, a serial port, or a network
> connection, the file name needs to be in the stream and there needs to be a way
> of determining when one file ends and the next one begins.
>
> For us, stderr and stdout, when defined, are used for communicating status
> and extraordinary events.  They are not well suited for transferring
> instrumentation data.
>
>> Martin
>
> David
>