C++ question

stevech

Well-known member
What does this do, in a cpp file?

Code:
class xyz;

where xyz is defined in one of the include files.

It doesn't create an instance of xyz.

confused
 
It declares a forward reference to the class (a forward declaration). The main use is to allow you to declare pointers to that class without having the full class definition visible (in C and C++, all class/struct/union pointers must have the same size and format, which is what makes these forward references possible).

I wrote the C front end for a C-hostile machine (Data General MV/Eclipse) in the 1980s that had different types of pointers: pointers to char had a different format than pointers to word items. In the DG case the pointers were the same size but a different format; on other computers, like PR1ME, the character pointer was a different size than the word pointer. Due to this rule, I had to make all struct/union pointers word pointers, and make every struct/union word aligned, even if the structure contained only chars.
 
Thanks. Got it.

In my early years, I did lots of assembly language on the Data General Nova 800. It had "indirection" addresses: any operand address with the most significant bit set (I never heard the term pointer back then). You could create an infinite loop with a self-referencing address or a loop of indirect addresses.

In a really old computer I worked with (military), there was magnetic core memory. If you did a jump-to-self ("jump ."), it overheated the core and either broke the hardware or caused a bit flip and a wild jump.

Long way from these to OOP and bytecode VMs like Java and C# use.
 
Last edited:
When byte addressing was added to the Eclipse computers (the 16-bit follow-on to the Nova), the byte pointer was formed by shifting the word pointer left 1 bit, losing the indirection bit. The bottom bit was then used to indicate which byte in a 16-bit word was referred to. So you had instructions that did 16-bit (and later, on the MV/Eclipse, 32-bit) loads and stores using AC2 or AC3 as the index pointers, and separate instructions that did a single-byte read/write (using any of the 4 registers as an index register).

In the pre-prototype era (before ANSI C function prototypes), most C programmers would freely pass pointers around without worrying about getting the types correct, and porting to the DG Eclipse tended to be a painful process. I did have several customers who thanked us for having such a strict C implementation, in that it forced them to be more type correct, but most weren't as happy.

Besides new instructions, one of the changes in the Eclipse was it had the ability to issue a hardware fault if an indirection chain had more than 15 elements before ending the indirection.
 
After the Nova 800, I did lots of asm on DEC PDP-11s. I think they too had a byte addressing scheme like the Nova's. I really liked the PDP-11's architecture: classic CISC. I wish we still had a focus on CISC instead of RISC, for many reasons. But I suppose the real driver for RISC was chip cost and better time to market.

Mike.. if you're of a mind to, I still struggle with C++ and the use of "&" for object references. If I omit it, the compiler seems to be OK, since it knows the symbol's type is a class. I have a hard time forgetting how I've used & in C for decades, and I still waste a lot of time on this topic reading poorly written C++ books and what credible stuff I can find with web searches. Either a paragraph or so here, or a referral to a well-written C++ reference, would help. So many books these days are learn-by-example rather than learn-the-theory, and the examples are often terse and oversimplified. I do a lot of C++, but there are a few things that just don't come naturally now. Just like a few words that I cannot spell, as old as I am!
 
I joined DG in 1979, and I didn't do much work on the Novas (though DG/L still supported Novas), mostly I worked on the Eclipse, MV/Eclipse, and finally finishing the 88000 port of GCC for the AViiON. So perhaps the later generations of Novas had the byte addressing.

I'm not as up on C++ as I am/was in C. Since I've been doing backend work since about 1988, it is somebody else's problem to do the parsing, and build the internal RTL for the code. If you have this:

Code:
struct foo { int a[200]; };
void bar (struct foo a) { ... }

In C, this passes a copy of the structure. On some machines the caller copies the structure somewhere and passes a pointer to the copy; on other machines all 200 integers are pushed on the stack. Because a local copy is made, no changes to the parameter are passed back to the caller.

In C++, if you declare the parameter with '&', the compiler passes a pointer to the object under the hood, and any changes to the object are mirrored in the original variable.

Code:
// C style semantics
struct foo { int a[200]; } s;
void bar (struct foo *p)
{
  p->a[0] = 1;
}

// C++ semantics
void bar2 (struct foo &p)
{
  p.a[1] = 2;
}

void call (void) {
  bar (&s);
  bar2 (s);
}

// after call(), s.a[0] == 1 and s.a[1] == 2.

Note that in the C syntax you have to do the dereference manually, while using & in the declaration means the compiler does it behind your back. It gives you a way to mimic other languages that pass by reference.
 
Note that in the C syntax you have to do the dereference manually, while using & in the declaration means the compiler does it behind your back. It gives you a way to mimic other languages that pass by reference.
I get hosed up in C++ with & and knowing when to use object-dot vs. object->.
I know how to do it in C, as there's no secret sauce in the compiler as there is in C++.
The jargon of "object reference" uses the same & that my brain is wired to think of as "address of" from C. We always pass objects (structs, some scalars) in C by reference via &. Why did C++ invent the term "reference"? Just so the compiler can dereference with '.' instead of '->'??
 
Last edited:
I don't know, but given C is just about the only compiled language that passes everything by value, I suspect it was more to help programmers used to more traditional languages move their code to C++. The & reference operator looks like it was added fairly early, in 1983, back when the language was called C with Classes instead of C++: http://www.cplusplus.com/info/history/
 
A big difference between C++ references and C++ pointers is that the reference itself (not necessarily the referent) is immutable: a reference cannot be reseated, and it must be bound to an object, so it cannot be null the way a pointer can. C++ tried to bring some discipline to object creation and destruction without sacrificing performance. Giving the compiler a bit more information about a variable also allows a bit more optimization.

As a point of style, if either a pointer or a reference would work, prefer the reference because it gives both the person reading the code and the compiler more information.
 
referent (noun, linguistics): the thing that a word or phrase denotes or stands for.
"'the Morning Star' and 'the Evening Star' have the same referent (the planet Venus)"

I bolstered up and spent some time reading about pointers vs. references, and how in C++ the & changes meaning when it is used in a declaration vs. an expression. I'm getting it now.

this is a good one
http://www.ntu.edu.sg/home/ehchua/programming/cpp/cp4_PointerReference.html
 
Note this is an old thread, and I am guessing that @punit's might be a spam-ish post, but not sure...
 