
C/C++ vs. Java vs. C#

And what he is saying is that there's no reason to code it as above. Instead you should code:

No, I did not say that anyone _should_ code that way.

I said one doesn't need to give names, and gave an example that shows how that could look. Maybe I wasn't really clear about that. If so, my fault. Keep in mind that English isn't my first language, me being a Kraut ;)

That said, I do indeed fully agree with what roger said:

I think that is a terrible practice. For several reasons.

Instead I wanted to point out that the compiler itself should never, ever, care about the _names_ you give in the declaration vs. the _names_ you use in the implementation. The only thing that should matter is the _order_ of the arguments.

If, however, the compiler really borks up when the names are swapped/changed, then it really is a compiler bug, IMHO.

And yes, the naming should indeed be consistent between declaration and implementation.

I fully agree that it should be obvious by looking at the declaration in the header file what a given function takes as arguments. Code should be self-explanatory as much as possible. And yes, no sense in writing one line of declaration preceded by 20 lines of "documentation".

Whoever came up with the idea that you should write the documentation into the code should be punished by having to read such code and explain it for at least 5 years. Doxygen is nice for getting an initial draft of source-code documentation, but way too many people overuse it.

Sorry for the derail into the finer details of coding, instead of staying on-rail regarding the differences/merits between the languages in the OP.

Greetings,

Chris
 
Well, "back then" we didn't care much about encryption, key-exchange, challenge-response methods, and the like. Also, we did not do much video or image editing on computers anyway.

Actually, we did. Or you did; I wasn't around back in the good old days(TM). But there's a scary amount of literature, dating back quite a while, about the ins and outs of random number generators and pseudo-random number generators.

This ties in quite well with the overall discussion as well. Sometimes you don't care whether your RNG is cryptographically secure, you don't care what its distribution looks like; you just want it to spit out different values. This is fundamental to the UNIX philosophy, actually - simplicity over correctness.

And sometimes, you do. Sometimes, you want to make sure you're getting your numbers from your /dev/random bit bucket on Linux, not your Yarrow algorithm BSD /dev/random. But then, you'll have to expect more complexity. That's why we have different tools for different problems (again, a UNIX principle - do one thing only and do it well), not one all-purpose programming language.
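
A minimal C++ sketch of those two extremes (the /dev/urandom path is Linux-specific, and nothing here is a recommendation for anything security-critical):
Code:
#include <cstdio>
#include <cstdlib>
#include <ctime>
#include <random>

int main()
{
    /* 1) "Just spit out different values" - test data, jitter, games. */
    std::srand(static_cast<unsigned>(std::time(nullptr)));
    std::printf("quick and dirty: %d\n", std::rand());

    /* 2) Decent statistical behaviour (still NOT cryptographically secure). */
    std::mt19937 gen(std::random_device{}());
    std::uniform_int_distribution<int> d6(1, 6);
    std::printf("mersenne twister d6: %d\n", d6(gen));

    /* 3) When you want the kernel's entropy pool directly (Linux). */
    unsigned int key = 0;
    std::FILE* f = std::fopen("/dev/urandom", "rb");
    if (f && std::fread(&key, sizeof key, 1, f) == 1)
        std::printf("from /dev/urandom: %u\n", key);
    if (f)
        std::fclose(f);
    return 0;
}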

Of course, everything is possible with just a hammer, goodwill, and a bigger hammer. So long as you're driving a Landy.
 
I also very, very strongly disagree with the 'should' in your post. I think that is a terrible practice. For several reasons. With which I will now bore you :)

Declarations are often all you get to see with commercial code where source code is not provided. Declarations should be extremely readable. To me, readable means sufficient comments, but no more than necessary.

I sat looking at the word 'should' before hitting submit on that post. I decided to leave it in for the more interesting conversation it was likely to spawn.

Sad to say, you can't bore me with code talk. :o

Ideally, I don't want to look at a header. I look at the document in hand that defines whatever's in the header.

If I don't have one, and will be spending a week+ dealing with something, I'll make the document, hopefully in a few hours. Then use it thereafter. That document should exist and be up to date for any significant piece of code.

I like a naked header with a fully doc'd block in front of the implementation and a document in hand that summarizes the comment blocks.
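
Roughly what I mean, as a completely made-up example (widget.h / widget.cpp and everything in them are invented, not from any real project):

widget.h:
Code:
/* widget.h - the "naked" header: declarations only, no comment noise. */
#ifndef WIDGET_H
#define WIDGET_H

int  widget_count(void);
void widget_reset(int starting_count);

#endif

widget.cpp:
Code:
/* widget.cpp - the fully doc'd block sits in front of each implementation;
   the hand-held summary document collects these blocks in one place. */
#include "widget.h"

static int g_widget_count = 0;

/*--------------------------------------------------------------
 * widget_reset
 * Purpose : reset the module's widget counter.
 * Input   : starting_count - value the counter is set to.
 * Returns : nothing.
 *------------------------------------------------------------*/
void widget_reset(int starting_count)
{
    g_widget_count = starting_count;
}

/*--------------------------------------------------------------
 * widget_count
 * Purpose : report the current value of the counter.
 * Returns : the current widget count.
 *------------------------------------------------------------*/
int widget_count(void)
{
    return g_widget_count;
}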
 
No, I did not say that anyone _should_ code that way.

I said one doesn't need to give names, and gave an example that shows how that could look. Maybe I wasn't really clear about that. If so, my fault. Keep in mind that English isn't my first language, me being a Kraut ;)

I certainly didn't intend any putting-words-in-your-mouth action here. I was speaking for myself, although I certainly didn't word it that way on either end of my sentence. ;)
 
Ideally, I don't want to look at a header. I look at the document in hand that defines whatever's in the header.
I guess that's partly a philosophical issue, and partly a question of how your brain works.

Me, I can remember functions pretty well, and what they do. And I hate printed documentation for use while coding, and electronic documentation just a bit less. When I wanted to learn OpenGL, for example, I read the Reference Guide and the User's Guide (the blue and red books) cover to cover several times. I can't imagine trying to learn OpenGL via API documentation. At the end of that I know I can call, say, Vertex3d, and the like, and what they do. But what are the parameters to some of these things, and in what order? My brain is such that I'll never remember. I remember big-picture stuff, but details escape me. I think a lot of programmers excel at remembering details, and thus a function like int foo (int, int, int) is perfectly readable. Me, I'm scratching my head! So, your method requires me to constantly be turning pages in a book, or searching in online documentation, having several windows open, referring back and forth, just to get the parameters to a function call right. (edit: not to mention the ability to just cut and paste from the header, and then just overtype the parameters with the variables/values you want).

On the other hand, in Visual Studio, if foo is declared Seconds foo (int distance, int speed), as soon as I type 'foo (' a little balloon pops up showing me that the first parameter is distance, not just 'int'. It also shows me the return type is Seconds. But even without that feature, being able to refer to a header file makes it easy for me to remember function names and parameter order with the minimum of fuss and page flipping. That equals not just speed, but accuracy. Moving eyes back and forth from book page to screen increases the odds of getting something wrong.
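
To put a made-up example on it (Seconds and travel_time are invented purely for illustration):
Code:
/* Invented declarations, just to illustrate the difference. */
typedef int Seconds;

/* Tells me - and the IDE tooltip - exactly what goes where: */
Seconds travel_time(int distance_m, int speed_m_per_s, int head_start_s);

/* Equally legal, but now I'm back to flipping pages to find the order:
   Seconds travel_time(int, int, int);                                  */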

But all that depends on my coding style. I hate the Windows API, where a simply named function like CreateFile can do things like: create a file, create a socket, create memory (memory-mapped files with no physical file), etc. It does a dozen things. No one could reasonably use that without written documentation. To me that is horrible design, though there is a pretty good book out there describing how some of this came to be. In my world a function does one thing, and you generally don't use parameters to control 15 different modes. The way OpenGL handles state-based programming is much better from my point of view, even if it is more verbose than the Microsoft way of handling things. 10 function calls to set the state, vs. 10 parameters in a single function, all with multiple interpretations depending on what the other parameters are set to. You want documentation for all your code, and I want my code so singular in purpose that only the big-picture stuff needs documentation at all.
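
As a caricature of the two styles (every name below is invented; this is not the real Windows or OpenGL API):
Code:
/* Invented API sketch - not real Windows or OpenGL calls. */
typedef void* handle_t;

/* Style 1: one call, a pile of parameters whose meaning shifts
   depending on what the other flags are set to.                */
handle_t create_thing(const char* name, int mode, int share, int flags,
                      int attributes, void* security, void* template_obj);

/* Style 2: small state-setting calls, each doing one obvious thing. */
void     thing_set_name(const char* name);
void     thing_set_mode(int mode);
void     thing_set_sharing(int share);
handle_t thing_create(void);   /* uses whatever state was set above */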

To bring this back to language (sorry wowbagger), C++ allows you to code things like this how you want - you aren't forced into a specific mode of expression. This is incredibly important in large programs where it usually makes sense to express yourself in several different modes (OO for GUI, functional for algorithmic work, templates for generic programming, etc). But to do so without creating a mess, style becomes incredibly important. It's not just style, it's about communicating to later coders exactly what you were thinking. If the language lets you hang yourself with parameter ordering, your style better make it hard to make a mistake, and take advantage of whatever IDE tools you have (like Visual Studio's autocomplete).
 
It's a great pity C++ was not adopted whole-heartedly, but at the time he was developing Linux, C++ was still teething, so it was perhaps a justifiable call.

That was a discussion regarding not the Linux kernel but git, which was started in 2005.
 
I know these questions will never be resolved, but I wonder how much of the cause of this is people not trying to use the language as intended. For example, I love Lisp, but wouldn't use it in my work, and anyone who tried to use it like C would be very, very unhappy. I used to be an Ada programmer, but switched to C++ due to the job. At first I tried to use C++ like Ada, and I was very unhappy with the language. For me the turning point was when I was developing a library of heuristic algorithms in both Ada and C++. While Ada has generics, they are severely restricted compared to C++. The restrictions are there to 'protect you'. You can make an unholy mess with templates in C++; it's harder to do so in Ada. OTOH, C++ allows expressing things you just cannot in Ada. Try making a vector class in Ada that accepts ints, floats, and structs. You can't (well, in Ada83; I haven't used the newer Adas). That protects you if you wrote the code to only work with integer types, but if you are smart enough to handle multiple types, you can create enormously useful things. Ada thinks it should protect you from yourself; C++ thinks you should be professional enough to decide what the limits should be. In any case, I stopped using C++ as a bastard version of Ada, and recognized that C++'s design was the way it was for a reason. Type safety has advantages and limitations. Pick your poison according to your needs. In any case, even today you can read articles in the Ada literature extolling how much safer it is, using craptacular C/C++ as the point of comparison. In my experience, I write equally safe Ada and C++ (where 'safe' means human-critical applications - bad code means somebody gets hurt). A dumbass writing naive and "hacky" code would probably generate safer Ada than C++.
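
To make the vector point concrete, here's a trivial C++ sketch (the Point struct is invented) - one generic container and one small generic helper of my own, handling ints, doubles, and structs without any per-type code:
Code:
#include <cstdio>
#include <vector>

struct Point { double x, y; };   /* invented example struct */

/* A tiny generic helper - works for any T with a default value and operator+. */
template <typename T>
T sum(const std::vector<T>& v)
{
    T total = T();
    for (const T& item : v)
        total = total + item;
    return total;
}

int main()
{
    std::vector<int>    ints    = {1, 2, 3};
    std::vector<double> doubles = {1.5, 2.5};
    std::vector<Point>  points  = {{1.0, 2.0}, {3.0, 4.0}};  /* the container
                                                                doesn't care */

    std::printf("%d %f %zu\n", sum(ints), sum(doubles), points.size());
    return 0;
}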

So, I spurn Java somewhat, but then I've never really used it super seriously for what it is intended for.
 
I had another (not original) thought - translation vs. fluency. In natural languages, for a while, while you are learning, you think of what you want to say and then translate it into the language you are trying to learn. After a while, you start thinking in the language - you are no longer translating. I often see what seems like a similar thing in a lot of code I see. Not so much, say, translating C to Java (though you will certainly see that), but more translating what the person is thinking about the problem they are trying to solve into the language they are using. I consider C++ a native language to me - just as native as English, if not more so. I think in C++ (when programming). I get the sense that that is probably a bit overstated, but I think you'll see what I mean if you don't take it too literally. To make a tortured analogy, English allows you to create horrendous sentence structures, but on average we speak grammatically and clearly based on our socioeconomic status. Esperanto and other invented languages are far superior to most natural languages as far as logic and avoiding pitfalls go, yet a fluent speaker really doesn't encounter the problems the language inventors worry about. Run/ran vs. going/went only trips up the learner; pointers vs. references, functional vs. OO, etc., only trip up the learner of C++. In any case, what I'm trying to say is that I don't think in some kind of pseudocode when I'm designing, only to tortuously try to express it in C++. My design documents are mostly C++, with a small number of diagrams and words thrown in. It's how I think about a problem, for better and for worse (worse: a language limits you to thinking about things that are easy to express in that language). It's hard to express in words, but I think most intense coders know exactly what I mean.

It's a tortured comparison - I wouldn't push it too far - but I think it sums up how I feel when somebody starts talking about all the troubles C++ has. I just don't encounter those problems, and I wonder what the fuss is about. Heck, I still use GOTO very occasionally (it's great when writing device handlers and you are deep inside a bunch of conditional statements. It's even better when you write code generators, as I sometimes do). If you are fluent, a lot of the supposed problems disappear. English has the 'problem' of the double negative, but all but an Aspie-type literalist will understand the statement "I ain't got no money to give you!" C++ has the 'problem' of pointers and unchecked casts, a problem that never materializes if you use them appropriately, and use other features of the language intended to avoid the real problems of pointers and unchecked casts.
 
Roger,

Well said. Pretty much the same here when I'm coding something.

Greetings,

Chris

P.S.: Do you think you could make smaller paragraphs? It's sometimes hard to read such almost-walls-of-text. Just saying.
 
I had another (not original) thought - translation vs. fluency. In natural languages, for a while, while you are learning, you think of what you want to say and then translate it into the language you are trying to learn. After a while, you start thinking in the language - you are no longer translating. I often see what seems like a similar thing in a lot of code I see. Not so much, say, translating C to Java (though you will certainly see that), but more translating what the person is thinking about the problem they are trying to solve into the language they are using. I consider C++ a native language to me - just as native as English, if not more so. I think in C++ (when programming). I get the sense that that is probably a bit overstated . . .

Actually, I don't think you're overstating it a bit. Once I'm comfortable in a programming language (and the identity of that language has occasionally changed - Fortran IV, Commodore Basic, Basica, Turbo Pascal, C++, Delphi, C#. I've learned many others - PL/C, Algol, Lisp, Forth, Python, tcl, etc - without getting fluent), I really do think in it. And when I'm trying to learn a new language, at first I'm just trying to find the LanguageY syntax for doing a thing I formerly did in LanguageX. At first I don't really use the 'new' LanguageY features, and I wind up with some pretty tortured constructs to get LanguageY to do things like I did 'em in LanguageX. But after a while, I let go of X and start programming in Y the way Y was meant to be programmed.

But this leads to a lot of angry debates - for instance, Java programmers don't see the point in C#'s properties, since the properties don't do anything that you can't do with getters/setters and the Java programmers were planning to write getters/setters anyway. C# programmers think properties give a quick, clean solution without a bunch of pretty mindless getter/setter routines. The Java programmers are correct if you think in Java; the C# programmers are correct if you think in C#.

When transitioning between languages, I have occasionally found that I really was having problems that I wasn't aware of. I'd thought I had my C++ header file dependencies ("what's the big deal?") all sorted out until I started working in Delphi. I'd thought that my Delphi UI stuff was working just fine until I learned C#. I'd thought that C# handled lists just fine until I learned Python. Etc etc.

Of course, that whole "didn't know what I was missing" applies *way* beyond the field of programming.

BTW - back to the angry debates thing - I really would like to observe that I don't think I've ever seen a C++/C#/Java discussion thread that stayed this civil.

Kewl.
 
BTW - back to the angry debates thing - I really would like to observe that I don't think I've ever seen a C++/C#/Java discussion thread that stayed this civil.

Kewl.

Just a guess, but I think it may have to do with the fact that on this board, the majority of people are used to rational and critical thinking, instead of pushing agendas of whatever kind.

After all, there is no "only wrong / only right" thing when it comes to computing. The choice of language not only depends on the language, but also on the intended application. There are loads of highly specialized languages that surely fill the very special corner in which they are used. And they do so very well.

In the end it simply matters what you want to achieve and what your overarching goals are. Want something that runs just about everywhere, albeit maybe slowly? Go for Java. Want perfect integration into a Microsoft environment? Go for .NET. Want to be fast and cross-platform? Go for C/C++.

As long as you know how to use a given tool or language properly, it really doesn't matter much. You can cause havoc in any language, nothing really protects you from that. Even if said havoc is just unreadable, hard to maintain code.

Arguing about "the best programming language" is as useless as arguing about the best operating system. Different people, different tastes. As long as it does the job adequately, it should be fine.

However, and this is my personal point of view, I much prefer open source wherever possible. That goes for compilers as well as for the OS and the file formats used. But again, whatever tool suits your need, use it.

As long as it doesn't involve Forth ... ;)

Greetings,

Chris
 
Hello dasmiller,



May I ask which compiler you used? Because I have never, ever had that problem. What matters in the declaration is that the type(s) of the argument(s) match the implementation. The names are usually completely irrelevant. I know that _some_ compilers can, if instructed to do so, generate empty implementations of a function when only the declaration is given in the source, without an implementation.

If you really hit that bug because of the names alone, I would tend to say that it is a severe bug in the compiler instead.

On a side note, you don't need to give any variable names in the declaration at all. Something like "int myfunc(int, int);" is enough.

Greetings,

Chris

I'm not dasmiller, but I think the problem is that the args in the declaration are (x, y) but are switched in the definition. When someone looks in the header to see in what order to pass their args, they see (x, y), but the code will actually "read" them in the reverse order, screwing up calculations.

Sort of, f.h:
Code:
extern int foo(int x, int y);

f.c:
Code:
int foo(int y, int x)
{
  set_cursor_x(x);
  set_cursor_y(y);
  draw_some_stuff();
}

Call that function with "foo(900, 500);" on a 1200x800 screen and watch the fireworks, for free, regardless of compiler.

I wouldn't blame the language though, I'd blame the originator of the code.
 
Hello asmodean,

I'm not dasmiller, but I think the problem is that the args in the declaration are (x, y) but are switched in the definition. When someone looks in the header to see in what order to pass their args, they see (x, y), but the code will actually "read" them in the reverse order, screwing up calculations.

Sort of, f.h:
Code:
extern int foo(int x, int y);

f.c:
Code:
int foo(int y, int x)
{
  set_cursor_x(x);
  set_cursor_y(y);
  draw_some_stuff();
}

Call that function with "foo(900, 500);" on a 1200x800 screen and watch the fireworks, for free, regardless of compiler.

I wouldn't blame the language though, I'd blame the originator of the code.

Yes, if someone feeds the arguments in the wrong order because he/she assumes the declaration to be correct, that will happen.

However, as I understood it, the compiler itself mangled it up somehow. Maybe I misunderstood it. To me it sounded as if the declaration was something like foobar(int x, int y) and the implementation was foobar(int y, int x), and when called with foobar(1, 2) it would be executed as foobar(2, 1).

And that is what I mean should not happen, if it happened that way at all. Because arguments are normally passed on the stack and are position-dependent, not name-dependent.

But as said, I might have got it wrong.

Greetings,

Chris
 
However, as I understood it, the compiler itself mangled it up somehow. Maybe I misunderstood it. To me it sounded as if the declaration was something like foobar(int x, int y) and the implementation was foobar(int y, int x), and when called with foobar(1, 2) it would be executed as foobar(2, 1).

And that is what I mean should not happen, if it happened that way at all. Because arguments are normally passed on the stack and are position-dependent, not name-dependent.

But as said, I might have got it wrong.
You aren't thinking it through to the conclusion. In C/C++, parameters are, as you say, purely position-dependent.

So, for example

SetScreen (int width, int height);
SetScreen (int height, int width)
{
doit;
}

The variable names in the declaration mean *nothing*. You could declare it as:
SetScreen (int x, int y)

And then implement as
SetScreen (int width, int height)

and the compiler won't complain. The only reason for the names in the declaration is to make it readable for the human - the compiler discards the information.

So the problem is that the programmer wrote the variable names in the wrong order in the declaration compared to the implementation.
If you looked at the header, you would think the proper call should be

SetScreen (1024, 768) to set the screen to 1024x768

but the code expects

SetScreen (768, 1024)

And so madness ensues. It's not a compiler bug; it's a consequence of parameters being matched by position, not by name.

To make it super clear, if you had
foo (int x, float y);

you could not then write the body as
foo (float y, int x) {...}

because here you clearly changed the order int,float vs float,int.

But you sure could write
foo (int y, float x) {}

since in both cases the first parameter is int, the second float.
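
Or, put together as a complete little program (invented names): any conforming compiler will accept it without a peep about the swapped names, and then cheerfully do the wrong thing.
Code:
#include <cstdio>

/* Declaration - these names exist only for the human reader. */
void SetScreen(int width, int height);

int main()
{
    SetScreen(1024, 768);   /* caller believes: width first, height second */
    return 0;
}

/* Definition - same types, same order, but the NAMES are swapped.
   The compiler doesn't care; the first argument simply lands in 'height'. */
void SetScreen(int height, int width)
{
    std::printf("treating %d as the height and %d as the width\n",
                height, width);
}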
 
I guess that's partly a philosophical issue, and partly a question of how your brain works.

Me, I can remember functions pretty well, and what they do. And I hate printed documentation for use while coding, and electronic documentation just a bit less.

I like printed documentation so that I don't have to switch between windows or have multiple windows open. I like to leave my code on screen as close to all the time as possible. I prefer to have anything I need to refer to in hand.
 
As I am reading through these posts, I find little to disagree with, and little worth commenting on. The whole "each language has its use" concept is one I think we all agree on well enough, even if some of the little details and reasonings differ.

I do have more to say about ASP.NET. But, this isn't an ASP.NET thread. So, I'm not sure I should respond to that, here, or start a new one. Hmmmm.... I'll decide a little later.
 
I think one of the most important considerations has to do with scalability. I think that 99% of programmers never worry about, and never need to worry about, scalability - from both a language perspective and an algorithm perspective. For those 99% of programmers, who cares if garbage collection is inefficient? For the other 1%, if managing objects gets them from O(n^2) to O(log n), that's a big deal. The same goes for APIs: most programmers don't exercise APIs fully or need them to be scalable.
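
For that 1%, the difference looks something like the sketch below (the Object type and functions are invented) - a linear scan per lookup versus a sorted container doing the same job:
Code:
#include <cstdio>
#include <map>
#include <string>
#include <vector>

struct Object { std::string name; int value; };   /* invented example type */

/* O(n) per lookup - fine for 50 objects, painful for 5 million. */
const Object* find_linear(const std::vector<Object>& objects,
                          const std::string& name)
{
    for (const Object& o : objects)
        if (o.name == name)
            return &o;
    return nullptr;
}

/* O(log n) per lookup - the same job, organised for scale. */
const Object* find_indexed(const std::map<std::string, Object>& objects,
                           const std::string& name)
{
    auto it = objects.find(name);
    return it == objects.end() ? nullptr : &it->second;
}

int main()
{
    std::vector<Object> flat = { {"a", 1}, {"b", 2} };
    std::map<std::string, Object> indexed;
    indexed["a"] = flat[0];
    indexed["b"] = flat[1];

    std::printf("%d %d\n",
                find_linear(flat, "b")->value,
                find_indexed(indexed, "b")->value);
    return 0;
}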

I think the most dangerous individual is one that determines that a large, functioning code base, needs to be rewritten in some other language in one go.

I think we all have our own language and toolkit prejudices, but unless the reasoning is that the language and/or available toolkits cannot scale for the required project, it's really just a question of what the people doing the project can use most efficiently.

I would choose C++ along with a good toolkit like Boost any day over Java and/or C#. As far as operating system kernels go, I've seen the problems that can occur with creative (but still proper) interpretations of the C standard by compiler writers looking to eke out a little extra performance. I shudder to think of all the issues that could occur with a modern C++ kernel. The advantage of C is that it's very predictable, both in the code the compiler will produce and in the behavior of operators and objects.
 
