Programming Trends to Follow?

In my humble opinion, it all boils down to what delivers the best product in a limited amount of time for the least amount of money to the customer.

When you speak of Agile development, Extreme Programming (XP), test-driven development, and domain-driven design, I think these are responses to the poor results of the Waterfall SDLC. They tend to heavily emphasize communication and quick feedback between the developer and the customer. This generally increases product quality and development speed. All pluses in the eyes of the customer!

As for the up-and-coming dynamic languages such as Ruby and Python, I think these are rising primarily due to the low friction of scaffolding a brand-new site in a matter of minutes. For example, I could easily create a prototype blog in Ruby in about five minutes and have a working prototype for my customer. From there, you could literally add or remove properties from the model while your customer is sitting next to you, redeploy, and they could see the changes immediately. This, in my opinion, lends itself well to delivering the highest-quality product in the least amount of time for the least amount of money.
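
To make that concrete, here's a rough sketch of the same "prototype in minutes" idea, written in Python rather than Ruby purely for illustration (Flask is my assumption here, and every name in it is made up for the example, not something from an actual project):

    # A throwaway prototype "blog": one file, in-memory data, instant feedback.
    # Assumes Flask is installed (pip install flask); all names are illustrative.
    from flask import Flask, redirect, request

    app = Flask(__name__)
    posts = []  # in-memory stand-in for a database-backed model

    @app.route("/")
    def index():
        items = "".join(f"<li><b>{p['title']}</b>: {p['body']}</li>" for p in posts)
        form = ("<form method='post' action='/posts'>"
                "<input name='title' placeholder='title'> "
                "<input name='body' placeholder='body'> "
                "<button>Add</button></form>")
        return f"<ul>{items}</ul>{form}"

    @app.route("/posts", methods=["POST"])
    def create():
        # Adding or dropping a "model" field is a one-line change here,
        # which is the kind of sit-with-the-customer iteration described above.
        posts.append({"title": request.form["title"], "body": request.form["body"]})
        return redirect("/")

    if __name__ == "__main__":
        app.run(debug=True)  # the dev server reloads on edit, so changes show up immediately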
 
That is the trap, isn't it? What if you want to create an application that will do:

*Accounts
*Sales
*Purchasing
*Reporting
*HR
*Materials Management
*Production Planning
*Warehouse Management
*Customer Relationship Management
*Supplier Relationship Management
*Custom Development and Modification
*Handle terabytes of data
*Tie all this up into a consistent package

The 'matter of minutes' is not the issue any more. It is scalability, the ability to handle tens of thousands of tables, security, reliability, and availability that start to matter more.
 
That is the trap, isn't it? What if you want to create an application that will do:

*Accounts
*Sales
*Purchasing
*Reporting
*HR
*Materials Management
*Production Planning
*Warehouse Management
*Customer Relationship Management
*Supplier Relationship Management
*Custom Development and Modification
*Handle terabytes of data
*Tie all this up into a consistent package

The 'matter of minutes' is not the issue any more. It is scalability, the ability to handle tens of thousands of tables, security, reliability, and availability that start to matter more.

If those are the upfront requirements, then they would be communicated and obviously a different approach would be used. However, keep in mind that if you rely on the database as your point of integration, you will still have a monolithic system that, for one, doesn't scale well and, more importantly, isn't easily maintainable for future feature changes.

I challenge you to show me a large ERP system that has easily maintainable code, scales with the flip of a switch, has super-high availability, and is super secure.
 
WowBagger brings up a good point about certain Microsoft concepts being less than advertised, but this complaint is not specific to Microsoft - it's common. Marketing types like to make up hype-words, and you will see a LOT of that at trade shows. I suggest you join/subscribe to ACM or IEEE software publications to get a professional viewpoint without (as much) hype and exaggeration.

A lot of marketing concepts have no 'there' there. Middleware, cloud computing - in the end there is almost nothing new, just new emphasis and spin.

As software development matures as a discipline, and as more and more solutions appear to address various problems, I find myself torn between an ugly plethora of choices. I read up on the latest developments, and am constantly told "this is the next big thing!" It's something everyone is doing, all the jobs require this knowledge, etc., etc.

It's your job to sort the jive from the facts. No curious person has enough time to learn all they would like, but you need to examine these concepts deeply enough to evaluate their value. Sometimes you will be wrong.

When I first started working as a developer, Rapid Application Development (RAD) was the biggest thing since sliced bread; today, it is a stupid idea that we wasted too much time on ten years ago.

That was a clear fad IMO, tho' design/throw-away/redesign has some value. My opinion is that "design patterns" has pretty much run its course. That doesn't mean "zero value" or that you can ignore it, but you need not become either a fanatical acolyte or a blinder-wearing curmudgeon. Study a little. To the extent you find it useful, study more.

On the other hand, there is a serious problem with ignoring all the new trends and just sticking with what you know. That's what I did the first five years or so of my career, and I found myself seriously behind the times. Certain aspects of Agile development, Extreme Programming (XP), test-driven development, and domain-driven design have become essential tools for me in the last few years; none of this was taught in school.

Understood, I agree. Again - follow a trend to the extent YOU find it useful, and continue to explore and read.

Today the next big things are "the Cloud" (whatever that is), and dynamic programming languages like Ruby. There is still no universal development strategy for the Web. Even if you stick with Microsoft, you have to decide amongst ASP.Net, MVC, WebMatrix, Silverlight, and other technologies. After twenty years of the World Wide Web, you'd have thought they would have figured out how to develop for it by now.

Get outside of the Microsoft hothouse and look around. That's a sad, inbred culture. There is no one perfect solution to the web or anything else. The better solutions subsume and absorb the lesser ones. That takes long periods of time if the two opposing 'schools' have much depth. Don't become a foot-soldier/true-believer; see both sides clearly.


At the same time, there are many people arguing that some tried and true technologies are dead and shouldn't be bothered with. C/C++, Java, PHP...these are platforms whose time has passed, according to some prognosticators, even though they are still dominant forces in the programming world.

As background (the sort you SHOULD have gotten as an undergrad), you should be able to use a procedural language with pointers, like C, and also have decent skills in object-oriented programming/design. C++ (which version?) is ubiquitous but not the best IMO. You should have had a solid introduction to functional programming - that was Lisp and Scheme in my day, but I'd suggest Haskell today.

Sadly, Java was used as the sole undergrad language at some schools, and the results weren't good IMO. There is nothing wrong with Java; however, it's not a broad enough basis for an education - it's the Pascal of the 90s. Given the trends, you really need to consider the impact of parallelism and concurrency in software - both in functional languages and parallelizing libraries, and in distributed systems.

Many BSCS types couldn't explain a semaphore or a race condition, or sketch a device driver, if their lives depended on it. If you don't understand the foundation you stand on, then you can't make good decisions about building the next floor.
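
To put that complaint in concrete terms, here is a minimal Python sketch (all names made up for the example) of a race condition and the semaphore-style fix:

    # Two ways to bump a shared counter from several threads.
    # The unsynchronized version can lose updates because "counter += 1"
    # is a read-modify-write, not an atomic operation.
    import threading

    counter = 0
    lock = threading.Lock()  # effectively a binary semaphore

    def unsafe_increment(n):
        global counter
        for _ in range(n):
            counter += 1  # racy: two threads can read the same old value

    def safe_increment(n):
        global counter
        for _ in range(n):
            with lock:  # serialize the read-modify-write
                counter += 1

    threads = [threading.Thread(target=safe_increment, args=(100_000,))
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)  # 400000 with the lock; swap in unsafe_increment and it can come up short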

Given that a single person can only learn a limited portion of what's out there, and we have no idea what's actually going to be used in three years...how does one choose what to focus on today?

Plan to 'learn' 3 or more languages per year, not necessarily 100% mastery but good basics coverage and some thinking about the rest. Then it's not so important to choose the 'perfect' language to learn next. If you really enjoy one - then stick with it longer. You'll never regret following your intellectual interests. Read some good journals, but primarily opinion & thought that contradicts your own. Become a sukisha - a person of taste; learn to express comparative advantages and disadvantages of alternatives using your experience and reading. Think about how to express the differences.
 
In my experience, things like Ruby are good for putting up quick demos, more as a proof of concept or validity check (is this really what the customer wanted?) than anything else. Unless the application or website is going to remain with fewer than a hundred users, you have to code the final project in something else.

Ruby itself is written in pure C for a reason.

I like many of the ideas behind OOP, especially the encapsulation and the ability to keep the manipulation together with the data. But it all breaks down when you need to process 170 million records recoverably into a redundant data set during a 30-minute window. The tried and true procedural languages can take advantage of the processor power in a way that the newer languages can't -- and do so while minimizing the use of resources (other programs need to be running, too).

I use C++ every day. Compilers have long since reached the stage where it's actively counterproductive to hand-code inner loops in assembler (well, almost always), but I still prefer C over C++ for anything time-sensitive.

Like most disciplines, programming involves trade-offs. Speed of development vs. robustness of the final product. Ease of encapsulation vs. efficiency of algorithms. And so forth.

If you're just writing web apps for small companies, use whatever they want, or whatever you want (if you have the option). If you're going to work on large projects, or plan to serve millions of simultaneous customers, no interpreted language is going to have the chops.

Most large companies have standards for coding. Some of them are stupid, some of them are bitterly-learned lessons that come from trying to maintain something over a period of years or decades. Your bright newfangled language may be gone, or YOU may be gone, when it comes time to update the application. No one wants to pay those costs.

If you're flitting from job to job, or only doing lightweight one-offs, then you need to keep up with whatever's new and popular, if only because your customer might insist on using it. But you need the basics far more. Newer languages make tasks easier, but only by robbing you of knowing what's really happening. (Well, perhaps not robbing, but at least eliminating the need to know.)
 
In my experience, things like Ruby are good for putting up quick demos, more as a proof of concept or validity check (is this really what the customer wanted?) than anything else. Unless the application or website is going to remain with fewer than a hundred users, you have to code the final project in something else.
Yeah, that's what Python is for. :)
 
If those are the upfront requirements, then they would be communicated and obviously a different approach would be used. However, keep in mind that if you rely on the database as your point of integration, you will still have a monolithic system that, for one, doesn't scale well and, more importantly, isn't easily maintainable for future feature changes.

I challenge you to show me a large ERP system that has easily maintainable code, scales with the flip of a switch, has super-high availability, and is super secure.

SAP does a good job of it. They started with a hierarchical database, and quickly changed to relational. Best design decision they ever made. Their experiment with Java was a major setback.
 
Plan to 'learn' 3 or more languages per year, not necessarily 100% mastery but good basics coverage and some thinking about the rest. Then it's not so important to choose the 'perfect' language to learn next. If you really enjoy one - then stick with it longer. You'll never regret following your intellectual interests. Read some good journals, but primarily opinion & thought that contradicts your own. Become a sukisha - a person of taste; learn to express comparative advantages and disadvantages of alternatives using your experience and reading. Think about how to express the differences.

This is probably the best advice I've seen in quite a while.
 
In my experience, things like Ruby are good for putting up quick demos, more as a proof of concept or validity check (is this really what the customer wanted?) than anything else. Unless the application or website is going to remain with fewer than a hundred users, you have to code the final project in something else.

I have seen quite a few applications that bear the tell-tale markings of having been merely a demo/proof-of-concept, but some bone-headed manager said, "It looks great, ship it!"
 
That is the trap, isn't it? What if you want to create an application that will do:

*Accounts
*Sales
*Purchasing
*Reporting
*HR
*Materials Management
*Production Planning
*Warehouse Management
*Customer Relationship Management
*Supplier Relationship Management
*Custom Development and Modification
*Handle terabytes of data
*Tie all this up into a consistent package

Why would you want to do that? A distributed system would work much better.
 
SAP does a good job of it. They started with a hierarchical database, and quickly changed to relational. Best design decision they ever made. Their experiment with Java was a major setback.

I don't want to say too much about SAP, because I don't have that much experience with it. However, it does always seem to be connected to a ridiculously over-complicated operation, in my limited experience.

Dell tried to implement SAP a while back. After spending millions of dollars and thousands of man-hours, they abandoned the project as unworkable.

This sort of thing is what keeps ERP consultants in business.
 
'Cloud' is basically online storage. Dropbox, essentially. Google Docs. Save online and you can access your data anywhere.

If you added in the concept of your applications being available online, on demand as well as 'storage' for your data then I'd agree with your definition.
 
It CAN lead to that. If used properly, it's a way to manage complexity. Sure, it would be more efficient to write spaghetti code, provided it works perfectly and you don't have to maintain it.

Non O-O code does not have to be spaghetti code. I have worked in IT for around 25 years, starting with COBOL and now working with J2EE. I don't see any great advantage to O-O that we didn't get from sensibly modularising our sequential code - indeed, I am often in dispute with my programmers' estimates, because I know how long something would have taken me to code 'the old way' and yet the new, 'fantastically re-usable' O-O approach gets estimated (and subsequently takes) nearly double or triple the time to write, and then is a bugger to maintain because 'Ooooh if you change that you'll have to spend weeks re-testing all this other stuff 'cos it all uses it.'

NB: Don't take this as me thinking O-O is a waste of time - each approach just has its pros and cons, and in pure programming terms we really don't seem to have moved on in the last 25 years. Computing power, nicety of interface, etc., absolutely - but efficiency of programming logic? Nope, other than there being a lot more ready-written code out there to copy and the internet making it easier to find.

Humph, I remember the claims for 4th-generation languages and how they would bring programming to the end user - and then the next big thing that came along was Java! Hardly end-user in its syntax and setup!

It's all a bit depressing really.:p
 
SAP does a good job of it. They started with a hierarchical database, and quickly changed to relational. Best design decision they ever made. Their experiment with Java was a major setback.

That easily maintainable and uncomplicated code must be why all SAP consultants are willing to work for pennies on the dollar, and developers are falling over each other trying to become SAP consultants. ;)
 
Non O-O code does not have to be spaghetti code. I have worked in IT for around 25 years, starting with COBOL and now working with J2EE. I don't see any great advantage to O-O that we didn't get from sensibly modularising our sequential code - indeed, I am often in dispute with my programmers' estimates, because I know how long something would have taken me to code 'the old way' and yet the new, 'fantastically re-usable' O-O approach gets estimated (and subsequently takes) nearly double or triple the time to write, and then is a bugger to maintain because 'Ooooh if you change that you'll have to spend weeks re-testing all this other stuff 'cos it all uses it.'

So wait. Let me get this straight. Does this mean we're supposed to have 50 different ship functions? And if tomorrow the business changes its ship-via from FedEx to UPS, we need to touch 50 different files? Damn... I didn't realize. I've been doing it all wrong this whole time.
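
For what it's worth, the textbook O-O answer to that sarcasm looks roughly like this minimal Python sketch (the carrier classes and fulfill are hypothetical names): every call site depends on one abstraction, so switching carriers is one change rather than 50.

    # One ship abstraction, many carriers; call sites never name the carrier.
    from abc import ABC, abstractmethod

    class Shipper(ABC):
        @abstractmethod
        def ship(self, order_id: int) -> str: ...

    class FedExShipper(Shipper):
        def ship(self, order_id: int) -> str:
            return f"FedEx label for order {order_id}"

    class UPSShipper(Shipper):
        def ship(self, order_id: int) -> str:
            return f"UPS label for order {order_id}"

    def fulfill(order_id: int, shipper: Shipper) -> str:
        # The 50 places that ship things all look like this line.
        return shipper.ship(order_id)

    # Switching the business from FedEx to UPS is one change, here:
    print(fulfill(42, UPSShipper()))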
 
Non O-O code does not have to be spaghetti code.

I'm using the term liberally--what I mean by "spaghetti code" is purely procedural programming.

I have worked in IT for around 25 years, starting with COBOL and now working with J2EE. I don't see any great advantage to O-O that we didn't get from sensibly modularising our sequential code - indeed, I am often in dispute with my programmers' estimates, because I know how long something would have taken me to code 'the old way' and yet the new, 'fantastically re-usable' O-O approach gets estimated (and subsequently takes) nearly double or triple the time to write

Ninety-five percent of programmers do not know how long a project will take. The other five percent are liars.

and then is a bugger to maintain because 'Ooooh if you change that you'll have to spend weeks re-testing all this other stuff 'cos it all uses it.'

Not if you do it properly, with unit tests and continuous integration. Of course, this only works if you do OOP properly.
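
As a minimal illustration (the routine and its discount rule are invented for the example): a unit test pins down the behaviour of a widely shared routine, and a CI server can then re-run the "weeks of re-testing" in seconds on every change.

    import unittest

    def apply_discount(total: float, rate: float) -> float:
        # Imagine this routine is the one "all this other stuff" uses.
        if not 0 <= rate <= 1:
            raise ValueError("rate must be between 0 and 1")
        return round(total * (1 - rate), 2)

    class ApplyDiscountTest(unittest.TestCase):
        def test_typical_rate(self):
            self.assertEqual(apply_discount(100.0, 0.25), 75.0)

        def test_rejects_bad_rate(self):
            with self.assertRaises(ValueError):
                apply_discount(100.0, 1.5)

    if __name__ == "__main__":
        unittest.main()  # a CI job would run this suite on every commit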

NB: Don't take this as me thinking O-O is a waste of time - each approach just has its pros and cons, and in pure programming terms we really don't seem to have moved on in the last 25 years. Computing power, nicety of interface, etc., absolutely - but efficiency of programming logic? Nope, other than there being a lot more ready-written code out there to copy and the internet making it easier to find.

That's the big dilemma these days...the more efficient it is, the less maintainable, and vice-versa.

Humph, I remember the claims for 4th-generation languages and how they would bring programming to the end user - and then the next big thing that came along was Java! Hardly end-user in its syntax and setup!

It's all a bit depressing really.:p

I agree, the industry has been far too optimistic about how easy it can make programming. Programming is hard, period. There's simply no way around it. What the 4th-generation languages do is make it (potentially!) more maintainable.
 
Java is a pig of a thing. Once you write any sufficiently complex application in Java, it becomes a massive lump of slowness that takes forever to start.

It can be a pig in some cases, especially on memory usage. C/C++ will always be better in this regard because it doesn't need to run a VM. However, I've seen real-world Java code run faster than a C++ version. It really depends on the specifics.

The problem is the Object Oriented programming paradigm. It leads to inherently inefficient code. SAP managed to make an incredibly complex application that used the relational database method of describing the data, and that does work. Using OO to describe your data leads to an unmanageable pile of junk.

That's not necessarily the fault of OOP so much as the fault of the developers. Not every tool is the right fit for the job. If one's primary need is to arbitrarily query large sets of tabular data, then naturally a relational database is the better choice.
 
... OO leads to a complexity that cannot be managed.

It can, but not always. One trouble area is unit-testing. In order to completely isolate an object from all others, any collaboration must take place through abstract interfaces. This can result in tiny, 200-line applications exploding into 15 parts for no reason other than to satisfy testing requirements.
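
A minimal Python sketch of that trade-off (Clock, Greeter and the rest are hypothetical names): one trivial behaviour becomes four classes, purely so the test can swap in a fake collaborator.

    from abc import ABC, abstractmethod
    from datetime import datetime

    class Clock(ABC):  # interface introduced only so tests can isolate Greeter
        @abstractmethod
        def hour(self) -> int: ...

    class SystemClock(Clock):  # the real collaborator
        def hour(self) -> int:
            return datetime.now().hour

    class Greeter:
        def __init__(self, clock: Clock):
            self.clock = clock  # collaboration happens only through the abstraction

        def greeting(self) -> str:
            return "Good morning" if self.clock.hour() < 12 else "Good afternoon"

    class FixedClock(Clock):  # the test double
        def __init__(self, h: int):
            self.h = h
        def hour(self) -> int:
            return self.h

    # The "unit test": deterministic, because the real clock never runs.
    assert Greeter(FixedClock(9)).greeting() == "Good morning"
    assert Greeter(FixedClock(15)).greeting() == "Good afternoon"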

Relational Databases have proven to be far more capable of managing complex data ...

Relational systems are capable of managing some complex data, but not all. They often have difficulty with less-than-fully-structured information, for instance.

... it's the data that code exists to manage.

Not necessarily. Not all software systems are built around data-stores.
 
What if you want to create an application that will do:

*Accounts
*Sales
*Purchasing
*Reporting
*HR
*Materials Management
*Production Planning
*Warehouse Management
*Customer Relationship Management
*Supplier Relationship Management
*Custom Development and Modification
*Handle terabytes of data
*Tie all this up into a consistent package

I would argue that one should not attempt to create an application that does all of these things. In my experience, attempting to fit a large system into a single model is asking for trouble.
 
