
Computer modeling and simulation

Well, assuming they had the time to reprogram some of the code, they could possibly get something together using X58 boards and Tri-SLI.

Assuming they screened mounds of GPUs for memory errors, they could probably jury-rig a farm of i7s on X58 boards in Tri-SLI using 3x GTX 295s. A single Tri-SLI rig alone could probably crank out 6000 GFLOPS.
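(Rough sanity check, assuming a theoretical peak of around 1.8 TFLOPS per GTX 295: three of them come to about 5.4 TFLOPS, so ~6000 GFLOPS is in the right ballpark for peak throughput; sustained numbers would be a good deal lower.)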

Of course, you could probably cook an egg on the heat that a farm of rigs would create.

And you'd also need the programming expertise to convert code to something such a rig could easily utilize.

But you'd still need more than $20k in parts. You'd probably need a few grand alone just to get an adequate cooling setup.

I forgot all about GPUs and cooling.
Silly me, lol.
Then again, would just the head node need something insane, with the slaves just displaying numbers and diagnostics?
A single pro-grade GPU can set you back a few grand.

This might all be in a single case too,
not several workstations (like NIST's).

I still say running stuff off Newegg for something like this would result in a lot of BSODs and hangups,
creating more frustration than results.

It bogged down NIST's cluster.

Did NIST release any of the model data,
like gravity settings and the mass of components?

(I prefer CrossFire myself :D)
 
I forgot all about GPUs and cooling.
Silly me, lol.
Then again, would just the head node need something insane, with the slaves just displaying numbers and diagnostics?
A single pro-grade GPU can set you back a few grand.


He's talking about using the GPUs to do all of the calculations. I don't believe LS-DYNA supports that sort of thing right now.

I still say running stuff off Newegg for something like this would result in a lot of BSODs and hangups,
creating more frustration than results.


We're talking about a Linux cluster here, not Windows. There would not be a lot of BSODs, because they are specific to Windows.

If a serious problem exists, the Linux kernel can panic, bringing down the entire system. Unlike a Windows BSOD, a kernel panic will actually provide you with useful information to aid in troubleshooting.
 
He's talking about using the GPUs to do all of the calculations. I don't believe LS-DYNA supports that sort of thing right now.




We're talking about a Linux cluster here, not Windows. There would not be a lot of BSODs, because they are specific to Windows.

If a serious problem exists, the Linux kernel can panic, bringing down the entire system. Unlike a Windows BSOD, a kernel panic will actually provide you with useful information to aid in troubleshooting.

You do know Windows has error logs,
though to be fair I have heard Linux is more helpful in that respect.
I know it's a lot more stable.
I still think the hardware just won't be able to cope with the stress unless you had a serious investment in it.
20 grand is kid's play.
The OS is just one part of a complex system.
 
You do know Windows has error logs,
though to be fair I have heard Linux is more helpful in that respect.
I know it's a lot more stable.
I still think the hardware just won't be able to cope with the stress unless you had a serious investment in it.
20 grand is kid's play.
The OS is just one part of a complex system.

Oh, I agree.

Even if the truthers had the know-how, it'd still be more than $20k to build an appropriate rig.

You might be able to jury-rig a single farm of a couple dozen GTX 295s, and it would give you mounds of computational power, but it would cost an arm and a leg (or three times that if you go with something like the Quadro), and that's before you even get to figuring out how to cool the damned thing. You'd need a whole host of coolant reservoirs, piping, pumps, and radiators (one per loop). You'd need to convert the GPUs to liquid cooling, as well as get the proper parts to cool the CPU, MOSFETs, and DIMM slots.

Then you'd need a large enough power supply to keep the thing running 24/7 for as long as you needed.

And, again, you'd need to convert the code to utilize the GPUs' power in the computations.

If you had the know-how, you could ATTEMPT something like this.

But I don't think they'd have the time to run the computational parts (CPU, GPU, RAM) through testing (Prime95, etc), nor the technological expertise to set it up properly, nor the coding experience to change the code to actually utilize and harness the combined power of the i7 and those GPUs.

Long story short, you're not getting that done for $20,000.

You'd get a nice gaming computer, though. It'd be obsolete once the GTX 300 series came out, but it'd be nice.
 
But I don't think they'd have the time to run the computational parts (CPU, GPU, RAM) through testing (Prime95, etc), nor the technological expertise to set it up properly, nor the coding experience to change the code to actually utilize and harness the combined power of the i7 and those GPUs.


Why are we even talking about GPUs? How are you going to change the code for a closed-source, commercial application?
 
Why are we even talking about GPUs? How are you going to change the code for a closed-source, commercial application?

Because without harnessing the combined power of CPU/GPU, you haven't got a chance in hell of building a rig capable of seriously attempting such a simulation.

Which is why I continuously said that this would only be possible if you had the know-how to convert code so that computations could be done in such a setup.
 
Because without harnessing the combined power of CPU/GPU, you haven't got a chance in hell of building a rig capable of seriously attempting such a simulation.

Which is why I continuously said that this would only be possible if you had the know-how to convert code so that computations could be done in such a setup.


Can you show me where someone else has taken a closed-source, commercial application and converted it to use a GPU in the way you're describing?
 
Can you show me where someone else has taken a closed-source, commercial application and converted it to use a GPU in the way you're describing?

It involves GPUs a few generations removed, but here's a paper from Caltech on what is, essentially, just that:

http://multires.caltech.edu/pubs/GPUSim.pdf

Again, all it requires is the know-how to properly convert the code. But that is something that would not come as cheap as the parts to build the rig for your attempt.
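Just to give a flavor of what "converting the code" actually involves, here's a minimal sketch of offloading a single explicit time-step update to the GPU with CUDA. To be clear, this is not LS-DYNA's code (that source is closed); the arrays, the kernel, and the numbers are purely illustrative:

// Minimal sketch of offloading one explicit time-integration step to a GPU
// with CUDA. NOT LS-DYNA code (that source is closed); everything here is
// illustrative: the arrays, the kernel, and the step count are made up.
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical explicit update applied to every nodal value:
// v += a*dt, then x += v*dt.
__global__ void integrate(float* x, float* v, const float* a, float dt, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        v[i] += a[i] * dt;
        x[i] += v[i] * dt;
    }
}

int main()
{
    const int n = 1 << 20;                 // one million nodal values
    const float dt = 1.0e-6f;              // illustrative time step
    const size_t bytes = n * sizeof(float);

    float *x, *v, *a;
    cudaMalloc(&x, bytes);                 // allocate arrays on the card
    cudaMalloc(&v, bytes);
    cudaMalloc(&a, bytes);
    cudaMemset(x, 0, bytes);               // zero-initialize for the demo
    cudaMemset(v, 0, bytes);
    cudaMemset(a, 0, bytes);

    const int threads = 256;
    const int blocks  = (n + threads - 1) / threads;
    for (int step = 0; step < 1000; ++step)        // main time loop
        integrate<<<blocks, threads>>>(x, v, a, dt, n);
    cudaDeviceSynchronize();               // wait for the GPU to finish

    printf("ran 1000 steps over %d values\n", n);
    cudaFree(x);
    cudaFree(v);
    cudaFree(a);
    return 0;
}

A toy loop like this is the easy part; the real work is restructuring a full solver's element calculations and data layout so they map onto thousands of GPU threads, and that know-how is exactly what costs more than the hardware.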

You're not getting it done for $20,000.
 
Again, all it requires is the know-how to properly convert the code. But that is something that would not come as cheap as the parts to build the rig for your attempt.

You're not getting it done for $20,000.


I'm aware of how powerful GPUs are for certain calculations - that's not the question I'm asking.

You keep talking about "converting the code" - how can someone convert the code if they don't have the code?

The source code for LS-DYNA is not publicly available - it's not open-source. Can you show me where someone has done this sort of conversion before, without the benefit of the source code?
 
I'm aware of how powerful GPUs are for certain calculations - that's not the question I'm asking.

You keep talking about "converting the code" - how can someone convert the code if they don't have the code?

The source code for LS-DYNA is not publicly available - it's not open-source. Can you show me where someone has done this sort of conversion before, without the benefit of the source code?

That's the issue, isn't it?

But it was not a part of my hypothetical.

I was just taking you to task for your claim that $20,000 would be enough to build a setup that could run the computations.

As I have pointed out, assuming you could get the code, it's going to cost a bit more than $20,000.
 
I was just taking you to task for your claim that $20,000 would be enough to build a setup that could run the computations.

As I have pointed out, assuming you could get the code, it's going to cost a bit more than $20,000.


Why would I need GPUs to build a Linux cluster that is comparable to the cluster described by NIST in their report? They didn't use GPUs.
 
Why would I need GPUs to build a Linux cluster that is comparable to the cluster described by NIST in their report? They didn't use GPUs.

Because using GPUs would give you an easier, and relatively cheaper, means of putting together a setup with computing power necessary for such a simulation.

ETA:

Just like Justin said, you'd also need to actually view the simulation.
 
I recently downloaded UFO: Alien Invasion, and it took 6 hours with the "make" and "make maps" commands to compile the game and build the maps. I found out later that there was a command, "make -j 3", that would get both CPUs to work on it.

Does that GPU talk mean that I could have gotten my graphics card involved in the rendering of the game's graphics files?
 
Why would I need GPUs to build a Linux cluster that is comparable to the cluster described by NIST in their report? They didn't use GPUs.

Did NIST describe shopping at Newegg?
 
I recently downloaded UFO: Alien Invasion, and it took 6 hours with the "make" and "make maps" commands to compile the game and build the maps. I found out later that there was a command, "make -j 3", that would get both CPUs to work on it.

Does that GPU talk mean that I could have gotten my graphics card involved in the rendering of the game's graphics files?

Rendering is CPU intensive;
I don't think so much for the GPU.

The GPU would be more stressed while drawing entire frames at a high rate.

When you have massive amounts of data at either, you get a bottleneck.

How much RAM do you have?
What kind of computer?
 
Did NIST describe shopping at Newegg?

I still think that, given enough time to run the proper tests (memtest, Prime95, etc.), you could probably put together a setup with more than enough power to run the NIST simulation (provided you have a workable build of the code for your setup).

I've seen examples of people creating farms based on mounds of GTX 295s. One prominent example is Folding@home. Mainly, this guy:

http://atlasfolding.com/

This guy makes a complete mockery of the Truthers. His father got Huntington's Disease, and he decided to spend his money building as powerful a farm as he could to run @home as quickly as possible.

The result? He's got 23 GTX 295s and 32 9800 GX2s running on 28 separate mobos (14 Intel, 14 AMD), each with 1 GB of RAM. His rig, which he paid for himself, has this much processing power:

78 TeraFLOPs.
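(Rough check, assuming theoretical peaks of about 1.8 TFLOPS per GTX 295 and 1.15 TFLOPS per 9800 GX2: 23 × 1.8 ≈ 41 TFLOPS and 32 × 1.15 ≈ 37 TFLOPS, which adds up to right around 78 TFLOPS, so the figure is consistent with the card count.)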

And Truthers argue over what something like this would cost a single person.
 
I still think that, given enough time to run the proper tests (memtest, Prime95, etc.), you could probably put together a setup with more than enough power to run the NIST simulation (provided you have a workable build of the code for your setup).

I've seen examples of people creating farms based on mounds of GTX 295s. One prominent example is Folding@home. Mainly, this guy:

http://atlasfolding.com/

This guy makes a complete mockery of the Truthers. His father got Huntington's Disease, and he decided to spend his money building as powerful a farm as he could to run @home as quickly as possible.

The result? He's got 23 GTX 295s and 32 9800 GX2s running on 28 separate mobos (14 Intel, 14 AMD), each with 1 GB of RAM. His rig, which he paid for himself, has this much processing power:

78 TeraFLOPs.

And Truthers argue over what something like this would cost a single person.

I so want to play Counter-Strike on that, lol.
 
I mentioned in my OP that Wirth Research employed several hundred computers in the design of the ARX-02a.

I note that this effort dwarfs the computing power available to NIST. When one considers the problem of trying to run data of this magnitude, even if one were somehow able to obtain all of NIST's original work or otherwise start from scratch, I think it's unlikely anyone would want to limit the computing power to the level NIST had.
I believe one complete run of their analysis took several months! To cut the computing time down you'd have to put together a much more powerful system, in which case deep44's $20K proposition is even more ridiculous.
Once again, even if you had the data and the software, how would you realistically even begin to re-analyze it in detail unless you also had large resources at your disposal?
Perhaps in another ten years that kind of processing power will be within reach of the average person sitting at home, but not today.

I can't find budget info for Acura's efforts, but I doubt the Wirth modeling was cheap. Time = money. Saving time costs even more money.
 
