• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems, let me know.

Computer modeling and simulation

To cut the computing time down you'd have to put together a much more powerful system; in that case, deep44's $20K proposition is even more ridiculous.


It would help if you actually understood what I'm trying to say.

Look back in this thread - people were talking about millions in hardware to run this NIST WTC7 simulation (LS-DYNA). I simply pointed out that a cluster comparable to the specs they provided could be built for under $20,000. That's a fact.

You can speculate all you want on what interconnect they were using, or how many useless GPUs they had in each server - that doesn't change my argument.
 
Alright deep, I will be submitting a FOIA request to find out how much was spent on the NIST models' software and hardware.

Would you care to make a friendly bet??

Please tell me how much you think was spent on this. I will post their EXACT responses in a post as soon as I get it back. If it turns out that we are all correct (which I suspect will be the case), will you, say, start a thread and admit that you were wrong?? That is all I ask.

I assume that you are a man, and would be willing to admit when you are wrong. Hell, I have done it a few times here; would you be willing to do that for us??

It doesn't have to be elaborate, just a simple, "I, Deep44, was wrong to assume that I could build a system comparable to NIST's for around 20,000 bones. I was wrong." That is all it would have to say. Hell, I will even let you have $50,000 if you want. Just please post what you think they spent on it.


Wanna take that bet, sir??
 
It would help if you actually understood what I'm trying to say.

Look back in this thread - people were talking about millions in hardware to run this NIST WTC7 simulation (LS-DYNA). I simply pointed out that a cluster comparable to the specs they provided could be built for under $20,000. That's a fact.

You can speculate all you want on what interconnect they were using, or how many useless GPUs they had in each server - that doesn't change my argument.

With respect, I don't think you've established that $20K would do it. You made the claim, yes; that much I agree with.
 
I recently downloaded UFO: Alien Invasion, and it took 6 hours with the "make" and "make maps" commands to render/compile the game. I found out later that there was a command, "make -j 3", that would get both CPUs to work on it.
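
For what it's worth, "make -j 3" simply lets make run up to three compile jobs at once, which is what keeps both CPUs busy. Below is a minimal C/OpenMP sketch of the same idea, spreading independent work across whatever cores are available; it assumes gcc with OpenMP support, and the file name and toy loop are purely illustrative.

/* parallel_demo.c -- minimal sketch of spreading independent work across
 * all available cores, the same idea behind "make -j 3".
 * Build (assuming gcc with OpenMP): gcc -O2 -fopenmp parallel_demo.c -o parallel_demo
 */
#include <stdio.h>
#include <omp.h>

int main(void)
{
    double sum = 0.0;
    long i;

    /* The reduction clause gives each thread a private partial sum and
     * combines them when the loop finishes. */
    #pragma omp parallel for reduction(+:sum)
    for (i = 1; i <= 100000000L; i++)
        sum += 1.0 / (double)i;

    printf("harmonic sum = %f (up to %d threads available)\n",
           sum, omp_get_max_threads());
    return 0;
}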

Does that GPU talk mean that I could have gotten my graphics card involved in the rendering of the game's graphics files?

There are two standards that the video card and driver need to support for using the GPU: OpenGL and OpenCL. Photoshop needs OpenGL 2.1 to support parallel processing. If the Linux tool supports it and the video card supplies it, you are good to go.

Wiki has articles on these standards.
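
As a rough illustration of that check, here is a minimal C sketch that just asks the driver whether any OpenCL-capable GPU is visible. It assumes the OpenCL headers and an ICD loader are installed; the file name is arbitrary.

/* cl_check.c -- minimal sketch: ask the OpenCL runtime whether a GPU is visible.
 * Build (assuming OpenCL headers and an ICD loader are installed):
 *   gcc -std=c99 cl_check.c -lOpenCL -o cl_check
 */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;

    if (clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS || num_platforms == 0) {
        printf("No OpenCL platforms found.\n");
        return 1;
    }

    for (cl_uint p = 0; p < num_platforms; p++) {
        cl_device_id device;
        cl_uint num_devices = 0;

        /* Ask this platform specifically for GPU devices. */
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 1, &device, &num_devices) == CL_SUCCESS
            && num_devices > 0) {
            char name[256] = {0};
            clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
            printf("OpenCL-capable GPU: %s\n", name);
            return 0;
        }
    }

    printf("OpenCL platform(s) found, but no GPU device.\n");
    return 1;
}

Note that on Nvidia hardware OpenCL support generally starts with the GeForce 8 series, so an older card may simply not show up here.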
 
Alright deep, I will be submitting a FOIA request to find out how much was spent on the NIST models' software and hardware.

Would you care to make a friendly bet??

Please tell me how much you think was spent on this. I will post their EXACT responses in a post as soon as I get it back. If it turns out that we are all correct (which I suspect will be the case), will you, say, start a thread and admit that you were wrong?? That is all I ask.


1. The $20K does not include software. Linux is free, and I specifically said that the cost of LS-DYNA was not included.

2. I never claimed that NIST only spent $20K. That would be absurd.

I'm sure the servers used by NIST were rack-mounted, and had all sorts of safety & security features, redundancy, on-site support, etc.

I, on the other hand, am referring to something like this:

http://obscuredclarity.blogspot.com/2008/09/24-core-linux-cluster-in-2999-case-from.html
 
I'll get the breakdown. I guarantee that you could not build a system anywhere NEAR what they had. Not even close.

Do you wanna bet that, without the software, it still would have been MUCH higher than $20,000? Hell, I bet they spent more than $20,000 on one rack, let alone the whole dang thing.
 
I'll get the breakdown. I guarantee that you could not build a system anywhere NEAR what they had. Not even close.

Do you wanna bet that, without the software, it still would have been MUCH higher than $20,000? Hell, I bet they spent more than $20,000 on one rack, let alone the whole dang thing.


For their cluster? Yeah - I'm sure it would be much more than that, but that doesn't mean it's any faster.

BTW - I doubt their cluster spanned more than one rack. It was only 8 nodes.
 
There are two standards that the video card and driver need to support for using the GPU: OpenGL and OpenCL. Photoshop needs OpenGL 2.1 to support parallel processing. If the Linux tool supports it and the video card supplies it, you are good to go.

Wiki has articles on these standards.


I think he's talking about using a GPU to speed up GCC (compiler). I don't believe GPU offloading is supported, at least not in a stable release.
 
So, what you guys are saying is that the calculations, even on this supercomputer, would take a while?

lol
I found a pic of a truther cluster

[attached image: fastmatt_cb3pack.jpg]
 
Y'know, you can do some fancy ◊◊◊◊ with a See 'n' Spell.

(Grandpa Simpson mode) I bought an Epson HX-20 with a speech module so my daughter could go to a Halloween party as Stephen Hawking. I did not know it spoke in a female voice. I should probably try again, but this one was inoperable and I hate to take working ones from thems what can use them.
 
Rendering is CPU intensive;
I don't think so much for the GPU.

The GPU would be more stressed while drawing entire frames at a high rate.

When you have massive amounts of data at either one you get a bottleneck.

How much RAM do you have?
What kind of computer?

It's just an Asus laptop, with two 2 GHz processors, 2 GB of RAM, and an Nvidia GeForce 7600 graphics card.
The homepage for the game did mention OpenGL; I thought it was some graphics driver for playing the game?

I am a complete newbie at Linux and am not sure how many assorted compiling tools, libraries, and drivers I have apt-get'ed so far to get this game installed.
 
BTW, why would anyone build supercomputers if you could get the same or better results by linking some much cheaper desktops?
 
BTW, why would anyone build supercomputers if you could get the same or better results by linking some much cheaper desktops?

You can make a supercomputer by the magic of parallel computing, as I remember was done in 2003 when the 64-bit dual-processor Power Mac G5 came out.

How about 1100 computers tied together? They must've cost at least a thousand bucks each, so the basic hardware was only about 1 million bucks!
(That's cheap for a supercomputer).

http://findarticles.com/p/articles/mi_m0EIN/is_2003_Nov_20/ai_110335283/
 
Pricing and Availability

CxC for Mac OS X is immediately available, with prices starting at $199 per single user license. The Power Mac G5 product line comes in three muscular configurations: 1.6GHz, 1.8GHz and a dual 2GHz model, with prices starting at $1,999.
The people connecting and setting them up do not work for free.
There must be some downside to clustering, since supercomputers are built.
 
There are two standards that the video card and driver need to support for using the GPU: OpenGL and OpenCL. Photoshop needs OpenGL 2.1 to support parallel processing. If the Linux tool supports it and the video card supplies it, you are good to go.

Wiki has articles on these standards.
Or CUDA...

I think he's talking about using a GPU to speed up GCC (compiler). I don't believe GPU offloading is supported, at least not in a stable release.

That's why careful coding is needed; otherwise the running time or the number of cluster nodes will skyrocket. Oh, and GCC cannot offload to the GPU; the GPU is strictly for the computation, not for compiling.

But when that coding is possible, why not use the SSEx instructions for a speed-up (assuming the required precision allows it)? However, it is not easy to hand-code proper vectorisation...
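
To give a feel for what that hand-vectorisation looks like, here is a minimal C sketch that uses SSE intrinsics to add two float arrays four elements at a time. The file name and array size are just for the example; real code would also have to worry about alignment and leftover elements.

/* sse_add.c -- minimal sketch of hand-vectorising a loop with SSE intrinsics:
 * add two float arrays four elements at a time.
 * Build: gcc -std=c99 -O2 -msse sse_add.c -o sse_add
 */
#include <stdio.h>
#include <xmmintrin.h>   /* SSE: __m128 and the _mm_*_ps single-precision intrinsics */

#define N 16             /* kept a multiple of 4 so there is no remainder loop */

int main(void)
{
    float a[N], b[N], c[N];

    for (int i = 0; i < N; i++) {
        a[i] = (float)i;
        b[i] = 2.0f * (float)i;
    }

    /* Process four floats per iteration; unaligned loads/stores keep the sketch simple. */
    for (int i = 0; i < N; i += 4) {
        __m128 va = _mm_loadu_ps(&a[i]);
        __m128 vb = _mm_loadu_ps(&b[i]);
        _mm_storeu_ps(&c[i], _mm_add_ps(va, vb));
    }

    for (int i = 0; i < N; i++)
        printf("%g ", c[i]);
    printf("\n");
    return 0;
}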
 
Why all the worry about the GPU?
The big problem is the actual calculations. The non-linearity of the system takes all the CPU you can spare. Most of the current programs (IDEAS, NASTRAN, and I would assume LS-DYNA) can make use of distributed processing--defined as "whatever CPU is not being used".
It is the actual CALCULATIONS that take the time.
Rendering and graphics are just bells and whistles--part of the "dog and pony show"--making pretty pictures to present the results to managers and other idiots who cannot understand the actual results.
 
The people connecting and setting them up do not work for free.
There must be some downside to clustering, since supercomputers are built.

"Supercomputer" really doesn't describe how the machine gets its speed.

These days, essentially all supercomputers are arrays of mass-produced CPUs connected in clusters. The quote below is from the people who maintain the "Top 500" list of large computers in the world. I can't identify any architecture other than cluster near the top of the list I scanned.

[IBM's] Roadrunner is still holding on to its number 1 spot [June 2009] with 1.105 petaflop/s. Roadrunner is based on the IBM QS22 blades which are built with advanced versions of the processor in the Sony PlayStation 3. These nodes are connected with a commodity InfiniBand network. [by my count, at least 20,000 cores]
...
A total of 399 systems (79.8 percent) are now using Intel processors. This is slightly up from six months ago (379 systems, 75.8 percent). Intel continues to provide the processors for the largest share of TOP500 systems.
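
As a rough sketch of how work gets spread across the nodes of such a cluster, here is a minimal C/MPI example in which every rank computes part of a numerical integral for pi and rank 0 adds the pieces together. It assumes an MPI stack such as Open MPI is installed; the step count and the "-np 8" are arbitrary, and this is not meant to represent NIST's actual setup.

/* pi_mpi.c -- minimal sketch of cluster-style parallelism with MPI:
 * each rank integrates part of 4/(1+x^2) over [0,1] and rank 0 collects the pieces.
 * Build and run (assuming an MPI stack such as Open MPI):
 *   mpicc -std=c99 pi_mpi.c -o pi_mpi && mpirun -np 8 ./pi_mpi
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    const long n = 10000000L;               /* total midpoint-rule steps */
    double h, local = 0.0, pi = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I? */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many are there in total? */

    h = 1.0 / (double)n;
    /* Each rank takes every size-th step: the same divide-the-work idea a
     * cluster uses whether the interconnect is gigabit Ethernet or InfiniBand. */
    for (long i = rank; i < n; i += size) {
        double x = ((double)i + 0.5) * h;
        local += 4.0 / (1.0 + x * x);
    }
    local *= h;

    /* Combine the partial results on rank 0; this is where the interconnect matters. */
    MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("pi ~= %.12f (computed on %d ranks)\n", pi, size);

    MPI_Finalize();
    return 0;
}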

 
Why all the worry about the GPU?
The big problem is the actual calculations. The non-linearity of the system takes all the CPU you can spare. Most of the current programs (IDEAS, NASTRAN, and I would assume LS-DYNA) can make use of distributed processing--defined as "whatever CPU is not being used".
It is the actual CALCULATIONS that take the time.
Rendering and graphics are just bells and whistles--part of the "dog and pony show"--making pretty pictures to present the results to managers and other idiots who cannot understand the actual results.

I was just saying that high-end pro-use GPUs are very expensive.
The rendering and animation (running the sim) is very CPU intense, not so much GPU,
but you'd hate to have the card slow you down when you're trying to play back.

If the sim were able to be run in real time with controllable dynamic cams at a high framerate (like a video game), the GPU would come into play a lot more.

Even still, they're not throwin' a Voodoo2 at it 'cause it was just layin' around lol
 
