• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems, let me know.

When it comes to math ...

Cute.


47x53 = 50x50 - 3x3 = 2491
92x88 = 90x90 - 2x2 = 8096
76x84 = 80x80 - 4x4 = 6384

It's just a difference of squares problem. I'm used to that in a more analytic context, haven't really thought much before about using it for straight arithmetic.
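As a quick sketch (the helper name and code are my own illustration, not from the thread), the trick is just (m − d)(m + d) = m² − d², where m is the round midpoint of the two factors:

```python
# Illustrative helper for the difference-of-squares trick: when two
# factors straddle a round midpoint m by the same offset d,
# (m - d) * (m + d) = m*m - d*d.
def midpoint_product(a, b):
    m = (a + b) // 2   # midpoint (assumes a + b is even)
    d = (b - a) // 2   # offset of each factor from the midpoint
    return m * m - d * d

print(midpoint_product(47, 53))  # 2500 - 9  -> 2491
print(midpoint_product(88, 92))  # 8100 - 4  -> 8096
print(midpoint_product(76, 84))  # 6400 - 16 -> 6384
```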

I do much the same for things like 63 x 91

= (60+3) x (90+1)
= (6x9x100) + (60) + (3x90) + 3
= 5400+60+270+3
= 5733
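The same expansion can be sketched in a few lines (the function name is my own illustration): split each factor into a round part and a remainder, then sum the four partial products.

```python
# Sketch of the distributive expansion used above:
# (a_round + a_rest) * (b_round + b_rest) as four partial products.
def expand_product(a_round, a_rest, b_round, b_rest):
    parts = [a_round * b_round, a_round * b_rest,
             a_rest * b_round, a_rest * b_rest]
    return parts, sum(parts)

parts, total = expand_product(60, 3, 90, 1)
print(parts, total)  # [5400, 60, 270, 3] 5733
```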
 
This thread seems to confuse knowing how to use various shortcuts to do arithmetic with the ability to be proficient at math. To me they are totally different.
Yes.
I encounter many situations where combinations such as the quarters and 1.75 have been solved so frequently that similar problems are easily solved, while more complicated situations can require some figuring out.
I even played around with doing long division a while ago, as I hadn't done any in years, relying instead on the calculator.
 
I do much the same for things like 63 x 91

= (60+3) x (90+1)
= (6x9x100) + (60) + (3x90) + 3
= 5400+60+270+3
= 5733
It would be interesting watching that being done in 1520! :)
 

Attachments

  • FingerCounting.jpg
When are you told to use the fewest lines of code?

My former company was weird. Most of us learned to code on the job, so sometimes things like that were requests from management (who also learned coding on the job and knew little about how an IT company should really be run). And a loop that simple wouldn't take any time at all to run, so there's no worry about bogging down the system. So, yeah...coding the lazy way using five lines instead of, say, 30 or so doing it the right way was actually quite common.
 
Established paradigms are next to impossible to change.

Here's an experiment you might like to try, since both you and your wife are in education. Ask fellow educators (of almost any field) what it is that makes air go into a normal household vacuum cleaner ... in other words, how does it work, in very general terms. I'll bet almost 100% will say that the air somehow gets sucked into it, when in fact there is no suction involved at all. The air inside is first PUSHED out of some volume by means of a fan (impeller) creating a low pressure zone behind it. This in turn allows the higher pressure ambient air to be PUSHED into the device, taking along with it as much nearby debris as it can. There is no pulling (suction), only pushing. See how many get that right.

Can you give me a reliable citation for that? When I use the hose attachment, there is definitely sucking and no pushing. Perhaps I am misunderstanding your description.
 
It's just semantics.
Air moves from high pressure to low pressure.
Something has to create the low pressure.
 
Can you give me a reliable citation for that? When I use the hose attachment, there is definitely sucking and no pushing. Perhaps I am misunderstanding your description.

I'm guessing that this is like the "there is no such thing as cold" semantics argument. It's a perception vs. physics thing.

Technically, the air going into the vacuum is not being sucked in, but pushed in by the higher pressure of the air in the room around the vacuum cleaner - it's trying to equalize pressure, and doing so through a small opening causes a big rush of air. Debris in the vicinity gets pushed along for the ride.
 
This thread seems to confuse knowing how to use various shortcuts to do arithmetic with the ability to be proficient at math. To me they are totally different.

If you think that, then I guess I am failing to make my point. I saw a couple of examples of "shortcuts" that I know to be occasionally taught. My ex-wife showed me a bunch of "shortcuts" she learned in a college course she took on teaching math. They were new to her as well. She only taught a few when she was a teacher. Other examples in this thread were not shortcuts but rather examples of different mental processes for attacking the problem. These are not taught - it's just how people think.

My point is that natural aptitude in math runs the gamut from those who will never grasp basic math no matter what to the Human Computer Guy (Danny T). Some people can visually do math the "long way" in their heads; others cannot. Some learn shortcuts. Some of us attack problems in our own ways that are efficient and comfortable for us.

I believe that answers the OP, which asked why "so many folks, including highly skilled individuals in their fields, fail so miserably." There's no reason to expect everyone to be good or even comfortable with math just as there is no reason to expect everyone to be able to draw a cat's face, which is literally child's play - hang out in art class for first graders to see what I mean.

I believe that many with a natural aptitude for math don't realize that they even have such an aptitude. They think it's like that for everybody. It's not. And while you can teach certain rote methods, for many people those rote methods are just that. They never "get it" so they are never comfortable with it.

Consider this example given earlier:
47 X 53 = ?
92 X 88 = ?
76 X 84 = ?

I looked at that for a few seconds and quickly realized the "shortcut" that was implied. I had never heard of it before. Anybody could be taught to recognize such a problem and to use the shortcut, but I argue that if you don't realize it yourself without being told, you'll never "own" it.

People without the right aptitude have to look at each problem and run through a mental checklist of available shortcuts they have been taught. It's like a toddler using a shape sorter - the child will try the shape in each hole until it fits. A developmental milestone is met when the child recognizes the shape in his hand and associates it with the corresponding hole. Some people never make the connections that come naturally to others.

Everyone's aptitude has its limits. I remember in junior high school geometry we did proofs. I found them incredibly easy and enjoyable. Other very bright students were stumped, which I found puzzling. I also remember learning about parabolas and such. It wasn't too long before I could see in my head what it would look like without having to make the graph. As I recall, very few of us could do that. Even students who got better grades than me (I was lazy and didn't check my work) couldn't do it and commented that it was "weird" that I did.

My education and career path never required calculus, but my ex-wife's did. I remember helping her with it. I never actually did any of the math, but I could explain to her in principle what was happening and what needed to be done. She got good grades, but I don't think she ever "owned" it.

In fact she was one of those people who hated being asked to do math in her head. She didn't really like doing it on paper. She worked hard in school and graduated college with some level of cum laude that I forget now. She was one of a handful of teachers offered an "open contract" before graduation, which in effect means, "We want you to work for us. Sign this and we'll find a spot." She was a successful student who put in lots of effort.

And yet I think some would consider her a "math averse" type of person because she would balk at having to do simple math in her head. Meanwhile, someone like myself with much less training in math and who really put very little effort into it would be considered the opposite.

If you want to "judge" us on our dedication, she would win hands down. Only nobody ever judges me because it comes easy to me. Those who judge her would come to the wrong conclusions unless they knew her like I did.
 
I'm guessing that this is like the "there is no such thing as cold" semantics argument. It's a perception vs. physics thing.

Technically, the air going into the vacuum is not being sucked in, but pushed in by the higher pressure of the air in the room around the vacuum cleaner - it's trying to equalize pressure, and doing so through a small opening causes a big rush of air. Debris in the vicinity gets pushed along for the ride.

I would also point out that the fan inside is pushing air out - it's a blower after all. This is what creates the low pressure area, and the other explanations pick it up from there.

You can look at it another way. Suppose you have an ordinary room fan with a protective wire cover. You put a piece of paper on the back. What's holding it in place? It's the air behind the paper pushing it against the fan cage. The air on the other side of the paper isn't applying some force to pull the paper.

Another visual is a rubber suction cup. Push one against the wall on the inside of a sealed box. What holds it in place? Is the suction "pulling" or is the air in the box pushing? To answer that question, "suck" all the air out of the box. What happens?
 
I would also point out that the fan inside is pushing air out - it's a blower after all. This is what creates the low pressure area, and the other explanations pick it up from there.

You can look at it another way. Suppose you have an ordinary room fan with a protective wire cover. You put a piece of paper on the back. What's holding it in place? It's the air behind the paper pushing it against the fan cage. The air on the other side of the paper isn't applying some force to pull the paper.

Another visual is a rubber suction cup. Push one against the wall on the inside of a sealed box. What holds it in place? Is the suction "pulling" or is the air in the box pushing? To answer that question, "suck" all the air out of the box. What happens?

Exactly. It's physics vs. perception, with a touch of semantics thrown in. In common terms and intuitive observation, one thing appears to be happening, whereas the actual driving forces behind the phenomena are doing the exact opposite. Nothing's sucking air into the vacuum, it's being pushed in. The paper isn't being sucked against the fan, it's being pushed against it. The suction cup isn't being pulled against the wall, external air pressure pins it there. And hypothermia isn't being too cold, it's being not warm enough.

Even so, the end result tends to be the same, whether using the colloquial or technical. Not always, though. Which is why it does pay to understand what's really causing these things, even if you don't think about them like that every day.
 
Even so, the end result tends to be the same, whether using the colloquial or technical. Not always, though. Which is why it does pay to understand what's really causing these things, even if you don't think about them like that every day.
I hate to extend this derail too much further, but I just want to say that the best part of pointing out that my Hoover doesn't suck is that it does get people to think about it. If somebody accepted everything that has been explained but said, "Right. That's what sucking is," I would be fine with it. They could call it hooveristics for all I care.
 
If you think that, then I guess I am failing to make my point.

No, I think you have been clear. I suspect our differences arise out of simple definition of "math". For example, consider this snippet.

I believe that many with a natural aptitude for math don't realize that they even have such an aptitude. They think it's like that for everybody. It's not. And while you can teach certain rote methods, for many people those rote methods are just that. They never "get it" so they are never comfortable with it.

Consider this example given earlier:
47 X 53 = ?
92 X 88 = ?
76 X 84 = ?

You used the word "math" then gave these examples. But I would NOT call those examples of math but examples of arithmetic.

That minor quibble out of the way, I agree with your point that there is a wide range of math aptitude out there and some people might be pretty good at it but for social stigma or fear of being wrong.


Finally, I think the real interesting part of the OP is the "pride of ignorance", so to speak, that many of us have seen around us. It's a real shame.
 
You used the word "math" then gave these examples. But I would NOT call those examples of math but examples of arithmetic.
Fair enough.

Finally, I think the real interesting part of the OP is the "pride of ignorance", so to speak, that many of us have seen around us. It's a real shame.
It *is* a shame. I also think it's a defense mechanism to some degree for many people. Just because there's no shame in not having an aptitude for something is no reason to take pride in the fact. But then again, boys don't make passes at girls who wear glasses.
 
But it seems like the mathematically inclined tend to get snobby with those who prefer to avoid math. Suppose there's a room full of people and somebody says, "If 9 people donated a total of 72 dollars, what was the average donation?" It seems like you'd expect everyone to give the answer.

Well, what if in that same room someone said, "Who would like to come draw a cat's face on the blackboard?" Do you think everyone should volunteer? It's really easy. In the world of drawing, it's basic addition. Would you look down on anyone who said, "No way. I suck at drawing"?
The difference is, nobody is ever proud of sucking at drawing.
 
Example: Suppose you had to write a program to determine the value of X given that X was an integer from 0 to 100.

* One guy loops a counter called i from 0 to 100 checking if x=i each time around. When they match, he quits and gives the answer.

* Another guy, who thinks he's being clever because he just read about a function that returns a random integer within a range, sets up an infinite loop generating a random number i and checking if x=i. Yech.

* Still another guy sees the above code and thinks, "Well, since the random number generator might repeat a number, I'm going to keep track of the numbers I've already checked so I don't check them again! I'm so smart!" This is a double-yech.

* The smart programmer writes a program that starts with the number 50 (1/2 of the range). He then checks if x>50. If not, he divides his starting number in half and checks if x>25. With just a few iterations he'll find the value of x.

How do you teach that? I never taught programming, but I did supervise and train a number of programmers. In my experience programmers either came up with that last solution or they didn't.
You teach that very easily -- in fact it is called "binary search" and is part of every first year Computer Science course. I taught it by telling my students that I can tell their Social Security number with 30 yes/no questions. "Is it bigger than 500,000,000?" That cuts the range in half. Then I keep halving it.
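The yes/no guessing game described above can be sketched in a few lines (the code and names are my own illustration, not from the course): repeatedly halve the candidate range with "is x greater than the midpoint?" checks.

```python
# Binary-search guessing game: find x in [lo, hi] by halving the range
# with yes/no comparisons, counting how many questions were needed.
def guess_number(x, lo=0, hi=100):
    questions = 0
    while lo < hi:
        mid = (lo + hi) // 2
        questions += 1
        if x > mid:         # "Is it bigger than mid?"
            lo = mid + 1    # yes: keep the upper half
        else:
            hi = mid        # no: keep the lower half
    return lo, questions

print(guess_number(73))  # (73, 7): any value in 0..100 takes at most 7 questions
```

The same halving logic covers the Social Security example: 10^9 possibilities fit in about 30 questions, since 2^30 > 10^9.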

I bet every programmer who DID NOT come up with the last solution was self-taught.

[Edited] On second thought, if I had to do this particular problem -- find an unknown number among 100 -- I would use solution #1 because it takes the fewest lines to write, and the savings in computational time are infinitesimal. But if I had to write a program which finds one number among a billion, or one out of 100 a billion times, I would definitely use solution #4. Knowing when to use wasteful but simple algorithms, and when to use complicated but efficient ones, is also something taught in Computer Science. A lot.
 
You teach that very easily -- in fact it is called "binary search" and is part of every first year Computer Science course. I taught it by telling my students that I can tell their Social Security number with 30 yes/no questions. "Is it bigger than 500,000,000?" That cuts the range in half. Then I keep halving it.

I bet every programmer who DID NOT come up with the last solution was self-taught.
By definition, if you taught them, they did not come up with it. You did. I was self-taught, and I came up with that.

[Edited] On second thought, if I had to do this particular problem -- find an unknown number among 100 -- I would use solution #1 because it takes the fewest lines to write, and the savings in computational time are infinitesimal. But if I had to write a program which finds one number among a billion, or one out of 100 a billion times, I would definitely use solution #4. Knowing when to use wasteful but simple algorithms, and when to use complicated but efficient ones, is also something taught in Computer Science. A lot.
If you're gonna play the "real world" game then you would use solution #5: output X. There's no need to do any processing because you already have the value being passed in. It was an exercise in efficient code. How anyone could think otherwise is beyond me.

I hate to derail this thread, but the attitude you describe is the reason so much software today runs slower than it should. We have a crop of programmers who look at each little piece of inefficient code as not being a big deal. So what happens when somebody uses that horribly inefficient function inside of another loop? And the next guy uses that loop inside another loop? And then users use that program to process millions of records instead of a few dozen like during testing?

Typing is the least expensive thing a programmer does. When a programmer knows of multiple solutions and decides to save a few minutes of typing by choosing an inefficient method, he wouldn't be working for me very long. I have investigated far too many "why is this so slow?" areas of programs and said, "what lazy idiot wrote this?"

The time it takes to type out the code is really the thing that should be ignored, not the processing requirements. Of course, this is not to say that you spend a full day trying to shave a few clock cycles if the payoff is not worth it. There's always a cost-benefit analysis, but a few minutes of typing is not a factor. I can't recall ever having two options where the number of lines of code was so significant that it would set a project behind schedule.

By contrast I have seen countless routines rewritten because they were too slow due to "save some typing" programmers who didn't think about the big picture and all the possible future ramifications. This is expensive to the company. First, you have clients (usually multiple ones) complaining to service reps. Dissatisfied clients don't give referrals and sometimes speak out against you or switch vendors. The problem moves up the chain. Somebody ends up having to root around in the code to find the problem. So then it gets rewritten. That change has to be documented. It needs to go through QA again.

Just one little blip like that can cost literally hours if not days of company time (and untold amounts of revenue). It's just not worth it.

For example, one of my peeves is programmers using SELECT * to get all the fields from a table instead of just the one(s) they need. The excuse is that they don't want to type out the field names, yet tools exist to do it for you. If not, it only takes a minute to type.

Meanwhile, if you have every programmer doing SELECT * for every hit on the database, it will slow things down. And when somebody modifies that table to include something like a memo field, which in many DBMS is stored in a different physical location on the disk, things slow down even more. It all adds up.
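As an illustrative sketch (the table name, columns, and data here are made up, not from the original post), the difference is easy to see with an in-memory SQLite table:

```python
# Why naming columns beats SELECT *: the query only pulls the bytes it
# needs, and a later schema change (say, adding a big memo column)
# doesn't silently bloat every existing query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clients (id INTEGER, name TEXT, notes TEXT)")
conn.execute("INSERT INTO clients VALUES (1, 'Acme', 'a very long memo...')")

# SELECT * drags along every column, including the memo field.
row_all = conn.execute("SELECT * FROM clients").fetchone()

# Naming only what you need keeps the result (and the I/O) small.
row_slim = conn.execute("SELECT id, name FROM clients").fetchone()

print(row_all)   # (1, 'Acme', 'a very long memo...')
print(row_slim)  # (1, 'Acme')
```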

Okay, my rant is done.
 
I hate to extend this derail too much further, but I just want to say that the best part of pointing out that my Hoover doesn't suck is that it does get people to think about it. If somebody accepted everything that has been explained but said, "Right. That's what sucking is," I would be fine with it. They could call it hooveristics for all I care.

Agreed. Getting people to think about it is the key, really.

And I don't think it's too much of a derail, as it's very closely related to the point made in the OP, though yes, it's not exactly what was mentioned.

:)
 
By definition, if you taught them, they did not come up with it. You did. I was self-taught, and I came up with that.


If you're gonna play the "real world" game then you would use solution #5: output X. There's no need to do any processing because you already have the value being passed in. It was an exercise in efficient code. How anyone could think otherwise is beyond me.

I hate to derail this thread, but the attitude you describe is the reason so much software today runs slower than it should. We have a crop of programmers who look at each little piece of inefficient code as not being a big deal. So what happens when somebody uses that horribly inefficient function inside of another loop? And the next guy uses that loop inside another loop? And then users use that program to process millions of records instead of a few dozen like during testing?

Typing is the least expensive thing a programmer does. When a programmer knows of multiple solutions and decides to save a few minutes of typing by choosing an inefficient method, he wouldn't be working for me very long. I have investigated far too many "why is this so slow?" areas of programs and said, "what lazy idiot wrote this?"

The time it takes to type out the code is really the thing that should be ignored, not the processing requirements. Of course, this is not to say that you spend a full day trying to shave a few clock cycles if the payoff is not worth it. There's always a cost-benefit analysis, but a few minutes of typing is not a factor. I can't recall ever having two options where the number of lines of code was so significant that it would set a project behind schedule.

By contrast I have seen countless routines rewritten because they were too slow due to "save some typing" programmers who didn't think about the big picture and all the possible future ramifications. This is expensive to the company. First, you have clients (usually multiple ones) complaining to service reps. Dissatisfied clients don't give referrals and sometimes speak out against you or switch vendors. The problem moves up the chain. Somebody ends up having to root around in the code to find the problem. So then it gets rewritten. That change has to be documented. It needs to go through QA again.

Just one little blip like that can cost literally hours if not days of company time (and untold amounts of revenue). It's just not worth it.

For example, one of my peeves is programmers using SELECT * to get all the fields from a table instead of just the one(s) they need. The excuse is that they don't want to type out the field names, yet tools exist to do it for you. If not, it only takes a minute to type.

Meanwhile, if you have every programmer doing SELECT * for every hit on the database, it will slow things down. And when somebody modifies that table to include something like a memo field, which in many DBMS is stored in a different physical location on the disk, things slow down even more. It all adds up.

Okay, my rant is done.

See, this is the same thing I was referencing when I said I'd choose 1 or 4 depending on what I was asked to do. My old company was full of managers who didn't think like this, and when this was explained to them (by us self-taught programmers no less), the typical response was a combination of "yeah but computers are faster now so that doesn't matter" and "just get it done quickly, I don't care." Which of course left us with not enough time to do it the right way in many situations. And we were often editing code already full of the "wrong way" - which doesn't excuse doing it the wrong way again, but the software was already so clunky and outdated as to not matter anyway.

The funny part? (OK, funny only to me.) Now I work for another company that's a client of my old company. So that nonsense with the inefficient coding isn't flying with me anymore. I know what's wrong with the software we've bought from them, I know how it can be better, I know what it can do that it's not doing, and I know how much we're overpaying for it...

There's another reason for you to code the right way...it can come back to bite ya in more ways than just inefficiency.

OK...now that's enough derailing from me. :)

To sort of bring it back to the thread topic: these things I tried to point out to management were things I picked up on my own, or learned from coworkers who knew what they were doing. My managers had the same opportunities to be exposed to this information as I did, and yet they couldn't understand why it was important, and didn't want to listen when it was explained to them; they knew enough and didn't need to listen to us programmers whine about unimportant stuff like this. And so they allowed the software, and thus the company, to suffer for it.

That's a halfway decent real-world example of why the willfully-ignorant mindset can be a very bad thing.
 
