
Computing predictions from 1979

Nasarius said:


Bah. That's not "non-physical cash", though. That's just a limited debit card. And yeah, my university does that.
True. To really work as non-physical cash, you need to be able to do things like:
- have anonymity of transactions
- be able to easily pass money between two private individuals
- not have to wait a week to get a replacement card if you lose or break the one you've got.

Holland (and I'm sure other European countries) has a half-way system called "Chip" or "Chipper". On your normal debit card, the chip allows you to hold small amounts of money (up to 99 Euros I think). You can charge it up and spend it without having to enter a pin number or sign for anything. It makes some things more convenient (most useful for things like parking meters and ticket machines) but is a long way from being non-physical cash.
 
What a deal.


[attached image: tandy.jpg]
 
epepke,
Now the positive:

1) Volume visualization will become practicable in real time and commonplace. Polygon rendering will be considered as "retro" as sprites are now.

Funny you should say that, because maybe we are not so far from quality real-time voxel rendering:

[attached images: Snap.jpg and Snap2.jpg]


I am coding RT voxel rendering as a hobby; those captures come from my software renderer running on a 500 MHz Mac, at a few frames per second, and I am pretty sure that good volume rendering would be possible today using hardware.
I am not so sure, however, about the advantages of voxels over polygons... I guess time will tell!
 
Peskanov said:
I am coding RT voxel rendering as a hobby; those captures come from my software renderer running on a 500 MHz Mac, at a few frames per second, and I am pretty sure that good volume rendering would be possible today using hardware.
I am not so sure, however, about the advantages of voxels over polygons... I guess time will tell!

Those are some pretty good images for the state-of-the-art, dude! Are you using a front-to-back or back-to-front algorithm?

Also, are you using the Altivec instructions? How about the programmable features of the graphics card? I saw some decent 2-D Navier-Stokes done in real time using a graphics card last year at SIGGRAPH.

As for advantages, I saw some excellent volumetric renderings of flames and smoke two years ago at SIGGRAPH.
 
Those are some pretty good images for the state-of-the-art, dude! Are you using a front-to-back or back-to-front algorithm?
Thanks! I am not using the painter algorithm, just raytracing. The idea is to render big scenes (these ones are 2048^3), and raytracing works much better.
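In case anyone is curious, the heart of it is just a 3D grid walk in the Amanatides & Woo style. This is not my Altivec code, only a rough scalar C sketch of the idea, with invented names and a small hard-coded grid:

Code:
#include <math.h>

#define N 256                        /* grid resolution per axis (2048 in the real scenes) */
static unsigned char grid[N][N][N];  /* 0 = empty, otherwise a material id */

/* Walk a ray (org, dir) through the grid one voxel at a time and return the
   id of the first solid voxel hit (0 = nothing hit). For brevity, dir is
   assumed non-zero on every axis and org is given in voxel coordinates. */
int trace_voxel(const float org[3], const float dir[3], int hit[3])
{
    int   v[3], step[3];
    float tmax[3], tdelta[3];

    for (int i = 0; i < 3; i++) {
        v[i]      = (int)floorf(org[i]);              /* starting voxel */
        step[i]   = dir[i] > 0.0f ? 1 : -1;
        tdelta[i] = fabsf(1.0f / dir[i]);             /* t needed to cross one voxel on this axis */
        float next = (dir[i] > 0.0f) ? (v[i] + 1 - org[i]) : (org[i] - v[i]);
        tmax[i]   = next * tdelta[i];                 /* t of the first boundary crossing */
    }

    while (v[0] >= 0 && v[0] < N && v[1] >= 0 && v[1] < N && v[2] >= 0 && v[2] < N) {
        int id = grid[v[0]][v[1]][v[2]];
        if (id) {                                     /* first solid voxel: done */
            hit[0] = v[0]; hit[1] = v[1]; hit[2] = v[2];
            return id;
        }
        /* step along the axis whose next boundary is closest */
        int a = (tmax[0] < tmax[1]) ? (tmax[0] < tmax[2] ? 0 : 2)
                                    : (tmax[1] < tmax[2] ? 1 : 2);
        v[a]    += step[a];
        tmax[a] += tdelta[a];
    }
    return 0;                                         /* left the grid: background */
}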
Also, are you using the Altivec instructions? How about the programmable features of the graphics card?
The raytracer is coded in Altivec assembler. I don't think programmable gfx cards are up to the task; they are still too limited. Also, voxels need too much memory; one scene takes 120 MB and the other 80 MB, I think.
I saw some decent 2-D Navier-Stokes done in real time using a graphics card last year at SIGGRAPH.
A friend of mine has a software one that is also quite good. I have a promising fluid simulation for Altivec, though not the Navier-Stokes kind.

The question of programmable gfx cards is quite funny. The next generation promises to bring fully programmable systems, so maybe gfx cards will become general purpose parallel computers, while the CPU remains there for compatibility!
The old prediction of parallel computing would end up fulfilled by the needs of gamers; wouldn't that be ironic?
As for advantages, I saw some excellent volumetric renderings of flames and smoke two years ago at SIGGRAPH.
Well, I reckon voxels are king for that kind of effect.
IMHO raytracing voxels has the advantage of less-than-linear cost for geometric complexity. I mean that doubling the geometric resolution of a scene does not double or triple the CPU cost, as it does in the polygon-painting model.
Also, raytracing has an advantage for realistic illumination models, but this is independent of using voxels or not...
My prediction for the coming years is that RT raytracing will grow a lot, even killing the traditional polygon-painting approaches.
Take a look at the best present efforts:
http://www.realstorm.com/
About volumetric rendering... I still can't decide if its future is promising or not.
 
Peskanov said:
My prediction for the coming years is that RT raytracing will grow a lot, even killing the traditional polygon-painting approaches.

What's "coming years"? AFAIK, decent raytracing is orders of magnitude away from being practical in real time.
 
Have a look at the downloads from the site Peskanov mentioned...I guarantee you'll change your mind.

They run pretty crappily on my system (AMD 2500 with an unspeakably bad substitute Voodoo3DFX instead of my beloved 9500Pro which I killed), but on a high-spec new system available today plus some cunning programming on one of these new graphics cards about to be released, we could see the start of something quite amazing.
 
Aerosolben, the reason RT raytracing is so unknown today is market pressure. Few invest in raytracing or volume rendering when there is such a big market for rasterizers.
Take a look again at the failed predictions of that 1979 book; there are not many technological difficulties, the markets are simply uninterested.
The first RT raytracer I saw was in 1995, containing a pair of infinite planes, one light, and a pair of spheres performing CSG.
For a good look at the state of the art, check the RealStorm engine; but if you are interested in more "academic" approaches, you can also take a look at this hardware project, for example:

http://www.saarcor.de/
 
There is a very interesting case of "futurology" coming from Stanislaw Lem's classic "Summa technologiae" (1963).
Lem wrote a large book trying to identify future technological tendencies. His best hit was virtual reality:

http://www.cyberiad.info/english/dziela/dziela.htm
Apart from prognoses such as biotechnology of the 21st century, cloning etc., I invented two different trends that indeed are currently present that - at the time of my writing - were only products of my imagination. These include virtual reality (I called it phantomatics; Virtual reality corresponds to the doctrine of bishop Berkeley's esse est percipi) and the evolution of technology from macroobjects with macrodimensions (cars, tanks, planes) to miniature objects called today molectronics - i.e. devices, including weapons, built from individual atoms, just as today the building blocks of houses are bricks. In one of later prognoses this micro-miniaturization led to the invention of "ingenious sand" that scientists today call "smart dust". Obviously, at that time this expression did not exist.

What I find interesting is that, a few years ago, Lem offered his thoughts about what was wrong with his anticipations:

http://www.cyberiad.info/english/dziela/dziela.htm
The confrontation of my idealistic futurological ideas with reality somewhat resembles a crash. Things that I dreamed of were not created, instead the world chose what I considered feasible and what turned out to be profitable. We did not take from the future what could have been more beautiful, what would made more sense and be more exciting, what would make us better - but what to the rich seemed more profitable and for what young people working for advertising agencies had better marketing ideas.
This means that from many possibilities technology and knowledge created we always chose a rather small part - that determined our next choices. Hence our criteria of choice were different from what I imagined some half a century ago - we did not choose what was more beautiful and useful, we chose what was simply more profitable.
 
Peskanov said:

Thanks! I am not using the painter algorithm, just raytracing. The idea is to render big scenes (these ones are 2048^3), and raytracing works much better.

I'm not referring specifically to the painter algorithm, but more to the Hue Opacity Density model, which is pretty good for objects of intermediate transparency, such as MRI models. Ironically, this actually came out several years before any practicable polygon model that really worked well for this kind of thing. Oddly enough, a lot has been done with Hue/Opacity, but the Density is where you get the ability to have nice specular reflection. It's essentially similar to index of refraction, but the HOD algorithm offers some nice optimized approximations for dealing with very complex data sets.
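I won't try to reproduce the HOD details from memory, but the front-to-back accumulation it builds on looks roughly like this (a generic compositing skeleton with invented names, not the HOD optimizations themselves):

Code:
/* Front-to-back compositing along one ray through a volume. The per-sample
   color and alpha are whatever the transfer function produced; this is only
   the accumulation loop, nothing model-specific. */
typedef struct { float r, g, b; } rgb;

rgb integrate_ray(int nsamples, const rgb *sample_color, const float *sample_alpha)
{
    rgb   out = {0.0f, 0.0f, 0.0f};
    float transmit = 1.0f;              /* how much light still gets through */

    for (int i = 0; i < nsamples && transmit > 0.01f; i++) {
        float a = sample_alpha[i];
        out.r += transmit * a * sample_color[i].r;
        out.g += transmit * a * sample_color[i].g;
        out.b += transmit * a * sample_color[i].b;
        transmit *= 1.0f - a;           /* early termination once nearly opaque */
    }
    return out;
}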

And, of course, radiosity. Radiosity and HOD rely on a back-to-front solution of the Kajiya rendering equation, while ray tracing relies on a front-to-back solution.

Another good use for a voxel-based approach: it's a lot easier to blow holes in things.
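For instance, blowing a spherical hole in a voxel model is nothing more than clearing cells. A sketch (invented names, brute force over the whole grid):

Code:
#define N 256
unsigned char grid[N][N][N];   /* stand-in for the renderer's voxel array; 0 = empty */

/* Carve a spherical hole of the given radius, centered at (cx, cy, cz).
   With a polygon model you would be re-tessellating the mesh instead. */
void blow_hole(float cx, float cy, float cz, float radius)
{
    float r2 = radius * radius;
    for (int z = 0; z < N; z++)
        for (int y = 0; y < N; y++)
            for (int x = 0; x < N; x++) {
                float dx = x - cx, dy = y - cy, dz = z - cz;
                if (dx*dx + dy*dy + dz*dz <= r2)
                    grid[x][y][z] = 0;
            }
}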
 
Speaking of getting rid of paper and coin money, has anyone ever heard of Mondex? MasterCard spent a lot of money on it, but then had to deal with a lot of conspiracy theory ideas about merchandise tracking and implanting chips into people's heads and hands and ridiculous ties to Revelation 13. Check what Snopes has to say about it here.
MasterCard did acquire a 51% stake in London-based Mondex International in 1996, and they have been trying to establish a variety of Mondex-based applications in a number of countries in recent years. However, attempts to launch the use of Mondex smart cards as "electronic purse" alternatives to cash over the past decade have so far been disappointingly unsuccessful...
 
aerosolben said:
What's "coming years"? AFAIK, decent raytracing is orders of magnitude away from being practical in real time.

Ray-casting as a technique has been available for some time. The original 3-D version of Castle Wolfenstein used it.

We used to do real-time raytracing. OK, so it was on a Connection Machine with 65536 processors, and it cost 7 million dollars, but that was when a 50MHz single processor was hot stuff.

What always falls behind are the modeling standards. Now, everything in game graphics is based on a 1980s view of the world. Back in the 1980s, it was all based on a 1960s view of pen plotters.
 
Peskanov said:
Aerosolben, the reason RT raytracing is so unknown today is market pressure. Few invest in raytracing or volume rendering when there is such a big market for rasterizers.
Take a look again at the failed predictions of that 1979 book; there are not many technological difficulties, the markets are simply uninterested.

The first RT raytracer I saw was in 1995, containing a pair of infinite planes, one light, and a pair of spheres performing CSG.

Ray-tracing is very old; the earliest work I know of was done in the 1960s. Tron (1982) did a fair amount of raytracing, but the first movie to use it almost exclusively for the special effects was The Last Starfighter (1984). I interviewed the guy who did the work on the Cray to bring rendering times down from two hours per frame to fifteen minutes per frame.

This was also the year the first raytraced image with motion blurring was produced (a trivial case involving pool balls on a table). I was doing some raytracing of thunderstorms by 1988 or 1989; even the SGI machines at the time (then called Silicon Graphics) were too slow to do Gouraud-shaded polygons on reasonable datasets at anything like animation speeds.

We had real time ray tracing around 1992 or 1993, which was a volumetric dataset with slicing planes, but that was on the CM-2. Your example sounds about right for a single processor at the time; ideal reflection and refraction, esp. with a single light source, and CSG are easy to do with ray tracing, hard with other techniques. Although I did see a paper on CSG with triangle models, it was a hairy algorithm.

So, it's been around a while, and real applications of graphics seem to lag a decade or two behind the state of the art.

I agree with your comments about marketing. The only way that a real-time ray tracer could make it in hardware would be if it also could do an existing rasterization protocol. The probability that this will ever happen with Direct X is close to hell freezing over, which leaves Open GL. However, despite the fact that everyone uses the Z-buffer in Open GL nowadays, the GL standard requires ordered drawing of primitives, which you can't do with ray tracing or radiosity or any global model.
 
Yahweh said:
Yahweh's computer predictions for the next 20 years:

2. Microsoft will split into 3 smaller enterprises, one of the three companies will rise up and crush the other two thus allowing it to monopolize the market.


Haha. :hit:
 
epepke,
Ray-tracing is very old; the earliest work I know of was done in the 1960s. Tron (1982) did a fair amount of raytracing, but the first movie to use it almost exclusively for the special effects was The Last Starfighter (1984). I interviewed the guy who did the work on the Cray to bring rendering times down from two hours per frame to fifteen minutes per frame.
I hardly remember anything about "The Last Starfighter", but I am a fan of the digital artwork shown in "Tron". I thought it used rasterizing, but now I recall it had good shadowing work, like in raytracing. However, I believe I read an article some years ago about pixel antialiasing for the rasterizers used in Tron.
Really, somebody should put all the code and documentation of Tron's computer work online as a testament to future generations. This work, and the people involved, are very important in the computer graphics field.
This was also the year the first raytraced image with motion blurring was produced (a trivial case involving pool balls on a table). I was doing some raytracing of thunderstorms by 1988 or 1989; even the SGI machines at the time (then called Silicon Graphics) were too slow to do Gouraud-shaded polygons on reasonable datasets at anything like animation speeds.
Yes, Catmull's work with the pool balls still looks incredibly realistic after 20 years!
BTW, are you sure about those SGI dates? I thought that real-time Gouraud shading was common in those times (in high-end machines, I mean).
We had real time ray tracing around 1992 or 1993, which was a volumetric dataset with slicing planes, but that was on the CM-2.
Hell, that code surely looked strange. I mean, looking at how the CM worked, managing the data probably was complex work, am I right?
BTW, where did you have access to a CM-2 machine, you lucky man?
CSG are easy to do with ray tracing, hard with other techniques. Although I did see a paper on CSG with triangle models, it was a hairy algorithm.
It still is; it's rarely seen. There is a game where you can dig into the walls using explosives, and also some gfx demos, but nothing too spectacular.
Now I recall that the TRON motorcycles were made with CSG, so it had to be raytracing; you are right.
So, it's been around a while, and real applications of graphics seem to lag a decade or two behind the state of the art.
I am not sure this is the reason. If you look carefully, you will see that raytracing was much more advanced than rasterizing in the eighties.
However, for RT applications, the two algorithms are very different. The relation "scene complexity / computations needed" is nearly linear for rasterizers and logarithmic for raytracers; sadly, raytracers need a huge amount of processing just to start working in RT!
At the algorithmic level, I think raytracing has been competitive with rasterizers for a few years already. I think it's simply a marketing question now; an application must capture the attention of the market. It could come from a hardware renderer built for video production, a feature on a next-gen console, a use for FPGA boards, etc...
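Just to put some completely made-up, back-of-the-envelope numbers on it: a rasterizer has to touch every polygon, so 100,000 triangles covering about 10 pixels each means roughly 100,000 triangle setups plus 1,000,000 pixel writes per frame, and doubling the geometry doubles that. A raytracer with a hierarchy over the scene does about one traversal per pixel, so 640x480 rays times log2(100,000), roughly 307,200 x 17, or around 5 million node visits, almost independent of the triangle count. The catch is that those 5 million visits are the entry fee before you have shaded a single pixel.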
I agree with your comments about marketing. The only way that a real-time ray tracer could make it in hardware would be if it also could do an existing rasterization protocol. The probability that this will ever happen with Direct X is close to hell freezing over, which leaves Open GL.
Well, there is an attempt at creating its own standard: OpenRT. However, I don't know if they are trying to integrate it with OpenGL.
However, despite the fact that everyone uses the Z-buffer in Open GL nowadays, the GL standard requires ordered drawing of primitives, which you can't do with ray tracing or radiosity or any global model.
You can, to some degree. Obtaining the Z's from a raytracer is not hard; just transform the ray collision point to camera space and use that Z.
However, the limitation is in the rasterizer. Ordering is still needed to ensure the correct final color of semi-transparent surfaces and additive and subtractive light primitives (glows, flares, smoke, etc...).
It's very hard to rasterize anything behind a semitransparent surface already painted by a raytracer. It is possible by keeping a list of traced samples for every screen pixel, though.
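Getting that Z is literally one subtraction and one dot product; a quick sketch in C (invented names):

Code:
typedef struct { float x, y, z; } vec3;

/* Depth of a ray hit point as a rasterizer would see it: the distance along
   the camera's forward axis, i.e. the camera-space Z. cam_forward is assumed
   to be normalized. Write this into the Z-buffer and the rasterizer can
   composite against whatever the raytracer produced. */
float camera_z(vec3 hit, vec3 cam_pos, vec3 cam_forward)
{
    vec3 d = { hit.x - cam_pos.x, hit.y - cam_pos.y, hit.z - cam_pos.z };
    return d.x * cam_forward.x + d.y * cam_forward.y + d.z * cam_forward.z;
}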
Anyway, strong efforts would be needed to integrate both techniques and I don't see the point.
The way to introduce raytracing to current OpenGL/DirectX developers is (maybe) to make the existing 3D data formats compatible with the raytracers. This way they could test their models/lights/animations with minimal effort...
 
Peskanov said:
BTW, are you sure about those SGI dates? I thought that real-time Gouraud shading was common in those times (in high-end machines, I mean).

Pretty sure. We had a 3000 at the time; the 4D didn't come out until a year or so later. You could do Gouraud shading on the 3000, but there wasn't enough screen memory to double-buffer in RGB mode with Z-buffer. Also, it was scientific work, so our datasets tended to have lots of little polygons.

Hell, that code surely looked strange. I mean, looking at how the CM worked, managing the data probably was complex work, am I right?

Not really. The truly amazing thing about the CM-2 was the software. It was brilliant how easily one could map practically any structure on what was essentially a hypercube with some buses.

BTW, where did you have access to a CM-2 machine, you lucky man?

At the Supercomputer Computations Research Institute at Florida State University. We actually had the first one with floating-point chips, so for a while, it was the fastest machine on the planet.

One negative thing was that the screen buffer sucked. It was supposed to be able to do NTSC, but it didn't quite do it well enough for our disc recorder to sync. I tried overclocking the graphics board once, but that tended to drift, so I just waited until we could get a scan converter. This was at a university, remember, so it's easier to get $7 million than $7 thousand.

The other was that it was right across from my office, which was fine for a while until the air conditioners started growing Legionnaires' disease or something. It was slimy and pink, anyway. I got a blast of the air every time someone opened the door, and the last two years I spent there, I was constantly sick.

Now I recall that the TRON motorcycles were made with CSG, so it had to be raytracing; you are right.

You're partially right, too. I saw a retrospective, and basically, every graphics house in the country was working on it at the time. They used a variety of techniques.

I am not sure this is the reason. If you look carefully, you will see that raytracing was much more advanced than rasterizing in the eighties.

But not fast.

You can, to some degree. Obtaining the Z's from a raytracer is not hard; just transform the ray collision point to camera space and use that Z.
However, the limitation is in the rasterizer. Ordering is still needed to ensure the correct final color of semi-transparent surfaces and additive and subtractive light primitives (glows, flares, smoke, etc...).

I don't think I'm making my point clear. Open GL has to be ordering-specific because SGI says so, and if it isn't, they won't let you call it Open GL, and if you can't call it Open GL, you don't get the goodwill of the name.

The SGI folks had to jump through some serious hoops when they got into more parallel architectures for the Reality Engine line.

It's very hard to rasterize anything behind a semitransparent surface already painted by a raytracer. It is possible by keeping a list of traced samples for every screen pixel, though.

I'm thinking more along the lines of rendering onto textures and then compositing the textures. But there is a class of global scanline algorithms that I imagine would integrate fairly nicely with ray tracing.

The way to introduce raytracing to current OpenGL/DirectX developers is (maybe) to make the existing 3D data formats compatible with the raytracers. This way they could test their models/lights/animations with minimal effort...

Maybe. I don't think that getting the models in the system is so much of a problem, as a lot of them are adapted from modelers intended to produce ray-traceable results anyway. I think a lot of game companies have a lot of investment in artistic techniques by people who know rasterizers well.
 
epepke,
Pretty sure. We had a 3000 at the time; the 4D didn't come out until a year or so later. You could do Gouraud shading on the 3000, but there wasn't enough screen memory to double-buffer in RGB mode with Z-buffer. Also, it was scientific work, so our datasets tended to have lots of little polygons.
How interesting; it's hard to believe how fast the RT gfx field has evolved in the last 15 years... If only as much interest and money had been invested in parallel computing...
Not really. The truly amazing thing about the CM-2 was the software. It was brilliant how easily one could map practically any structure on what was essentially a hypercube with some buses.
This sounds like a great computing experience. Do you know if any of the CM software is in the public domain now? I have read several papers in the IEEE magazines about the CM, but all of them were quite low-level, and my interest is in the languages and tools used there.
BTW, epepke, what do you think about the future of FPGA technology as a reprogrammable extension for general computing? I find current efforts fascinating, but I see the same problem already found in parallel computing: software.
One negative thing was that the screen buffer sucked. It was supposed to be able to do NTSC, but it didn't quite do it well enough for our disc recorder to sync.
Maybe you should have tried those anti-copy protection devices for VHS so popular in the nineties... They were great for readjusting video sync signals. :)
But not fast.
Well, rasterizers were not very fast either. If I remember correctly, there were attempts at raytracing hardware in the eighties. It was similar to Evans & Sutherland hardware: one small CPU for every pixel, each one calculating the ray-sphere intersection equation.
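That per-pixel test is just solving a quadratic; the usual formulation in C (only a sketch):

Code:
#include <math.h>

/* Intersect the ray org + t*dir (dir normalized) with a sphere at 'center'
   of radius r. Returns 1 and the nearest positive t on a hit, 0 otherwise. */
int ray_sphere(const float org[3], const float dir[3],
               const float center[3], float r, float *t)
{
    float oc[3] = { org[0] - center[0], org[1] - center[1], org[2] - center[2] };
    float b = oc[0]*dir[0] + oc[1]*dir[1] + oc[2]*dir[2];   /* dot(oc, dir) */
    float c = oc[0]*oc[0] + oc[1]*oc[1] + oc[2]*oc[2] - r*r;
    float disc = b*b - c;                                   /* quarter discriminant */
    if (disc < 0.0f) return 0;                              /* ray misses the sphere */
    float s  = sqrtf(disc);
    float t0 = -b - s, t1 = -b + s;
    *t = (t0 > 0.0f) ? t0 : t1;                             /* nearest hit in front of the origin */
    return *t > 0.0f;
}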
Anyway, as I said before, it's true that for simple scenes, rasterizers are much more economical. It's not a surprise they got so popular.
I don't think I'm making my point clear. Open GL has to be ordering-specific because SGI says so, and if it isn't, they won't let you call it Open GL, and if you can't call it Open GL, you don't get the goodwill of the name.

The SGI folks had to jump through some serious hoops when they got into more parallel architectures for the Reality Engine line.
OK, I was just pointing out the reason for this need: ordering is necessary when painting consecutive semitransparent layers, or you risk getting inconsistent color results on screen.
The same happens with Direct X or any other 3D interface. IMO it is a requirement of rasterizers featuring semitransparencies (nearly all of them!).
Of course you can paint just solid colors, and then ordering should not be a requirement. But who wants 3D gfx without semitransparencies?
I don't think it is SGI's fault; the only alternative to that philosophy is to maintain a list of texels for every pixel and order them locally at the end of the scene. This approach would be compatible with raytracers, and is also extremely parallelizable; however, you need really big iron for this!
I'm thinking more along the lines of rendering onto textures and then compositing the textures. But there is a class of global scanline algorithms that I imagine would integrate fairly nicely with ray tracing.
Mmm... but rendering to texture is mostly used to simulate effects which raytracers already offer naturally, like realistic reflections.
I don't see much use for scanline features if you already have a fast raytracer available. But keep in mind that I have the point of view of a videogame programmer, so maybe I am missing important points.
I don't think that getting the models in the system is so much of a problem, as a lot of them are adapted from modelers intended to produce ray-traceable results anyway. I think a lot of game companies have a lot of investment in artistic techniques by people who know rasterizers well.
Yes, but take a look at those techniques:
- Subdividing surfaces on the fly (Bezier patches, displacement mapping). Not needed in raytracing.
- Different tricks for shadows and reflections. Not needed in raytracing.
- Tricks for antialiasing. Not needed in a decent adaptive raytracer.
- Levels of detail. Using different models for different distances should be compatible with a raytracing approach (although not very useful).
- Glows, fires, smoke, etc... I think these kinds of effects are simulated the same way in rasterizers and raytracers, but I have some doubts.
- Non-realistic rendering (cel shading, etc.): would need a full rework for raytracers.
I hope you get my point: most of the work put into today's gfx technology is just to overcome the shortcomings of the rasterizer philosophy!
 
Peskanov said:
This sounds like a great computing experience.

It was. We were a computational research institute, meaning that our mission was to adapt new hardware and algorithms to a wide range of scientific problems. We were also a 100% non-classified, open shop. At various times we had Cyber 200, an ETA-10, a Cray Y/MP, the CM, an IBM shared memory box, and a cluster of a couple of hundred RS-6000s with Fibre Channel. But the Institute self-destructed starting around 1995. I was one of the 14 core charter members, and I was one of the last to leave, in 1998.

It was fun for a while, but the way that it self-destructed really left a bad taste in my mouth. We were always a popular whipping boy; Robert Park always referred to us as "Don Fuqua's pet project," but it always seemed like stupid put-downs because Florida State University wasn't a real university. But, if anything, we got to do more bleeding-edge stuff than any of the NSF sites because it was a new thing for the university.

Now, I have a hard time finding a job in any field. When I do get a job, the people are truly amazed with what I can do, but interviewers can't grok me. I'm still shaking with anger over an interview I had last Wednesday.

Do you know if any of the CM software is in the public domain now? I have read several papers in the IEEE magazines about the CM, but all of them were quite low-level, and my interest is in the languages and tools used there.

I don't know. I haven't followed it up.

BTW, epepke, what do you think about the future of FPGA technology as a reprogrammable extension for general computing? I find current efforts fascinating, but I see the same problem already found in parallel computing: software.

Software is a problem; even with our mission, we found that most scientists just wanted their 30-year-old FORTRAN programs (we called them "dusty decks") to work, but just magically faster. Memory is the other big problem.

Maybe you should have tried those anti-copy protection devices for VHS so popular in the nineties... They were great for readjusting video sync signals. :)

Oh, we did. But the problem was not irregularities in the horizontal retrace rate; it was the vertical retrace rate that the disc recorder couldn't sync on. And the board didn't do genlock. Monitors worked just fine, and we were able to record to tape moderately well, but it lacked the quality that we were able to get from the laserdisc recorder.

Anyway, as I said before, it's true that for simple scenes, rasterizers are much more economical. It's not a surprise they got so popular.

I think that texture mapping was what did it. The 3000 series didn't have any texture facilities at all. Textures have traditionally been of fairly limited utility in scientific and engineering work, because usually any shape has the same spacial resolution as any data you would want to use for color, so vertex colors are usually more appropriate.

However, in games, of course, textures are wonderful, because you can do something that looks good enough, cheaply.

OK, I was just pointing out the reason for this need: ordering is necessary when painting consecutive semitransparent layers, or you risk getting inconsistent color results on screen.

The same happens with Direct X or any other 3D interface. IMO it is a requirement of rasterizers featuring semitransparencies (nearly all of them!).

IMO, it shouldn't be, and there are better ways to do it. It's OK for games, because the designer can sit down and make assumptions based on viewpoint constraints. But for scientific applications, you don't know what kind of data or viewpoints are going to be used.

In any event, what you care about is depth, and so people do (or try to do) depth-sorting, convert that into a drawing order, and then expect the graphics card to handle it. Global scanline algorithms obviate the need to do this. There is a depth sort step, but it only applies to objects that intersect a particular scan line, and spatial coherence between successive scan lines makes it fast. It also solves the decal problem, which otherwise involves garbage like using a stencil buffer or painting the background polygon with read-only Z, painting the decal with read-only Z, and painting the background with only-write-to-Z.
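To spell the garbage out, the multipass version goes something like this (plain OpenGL state calls; draw_background() and draw_decal() are placeholders for whatever actually issues the geometry):

Code:
#include <GL/gl.h>

/* The decal hack described above: draw both coplanar polygons with the depth
   buffer read-only so the decal can't lose the Z fight, then go back and fill
   in the Z values so the rest of the scene composites correctly. */
void draw_decal_hack(void (*draw_background)(void), void (*draw_decal)(void))
{
    glEnable(GL_DEPTH_TEST);

    /* pass 1: background polygon, color only, no depth writes */
    glDepthMask(GL_FALSE);
    draw_background();

    /* pass 2: decal, also with read-only Z */
    draw_decal();

    /* pass 3: background again, writing only to Z */
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_TRUE);
    draw_background();

    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);  /* restore state */
}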

Of course you can paint just solid colors, and then ordering should not be a requirement. But who wants 3D gfx without semitransparencies?
I don't think it is SGI's fault; the only alternative to that philosophy is to maintain a list of texels for every pixel and order them locally at the end of the scene. This approach would be compatible with raytracers, and is also extremely parallelizable; however, you need really big iron for this!

The global scanline algorithms I used to play around with gave me, on a 640 by 480 image, 30 frames per second with a hundred textured polygons and 5 frames per second with five thousand, on a Mac running at something like 90 MHz. This was all in C except for the horizontal extent drawing, which was in assembly language. The biggest bottleneck was actually float/int conversion, which requires a memory access. There's probably a way to get around that, but I sort of gave up on it, because then I managed to get full-time employment.

This is not big iron. It was similar to performance by games that used the vertical skew approximation to make all horizontal and vertical surfaces linear in the texture lookup, and way better than the performance of any games with arbitrarily oriented surfaces. Also nicely compatible with a ray-tracing back end on some objects.

But rasterizers traditionally don't work that way; they work by bit blasting, the algorithms for which are much simpler and therefore possibly better suited for hardware. If I were doing a GSA in hardware, I'd probably try to build a scanline engine on a chip and do the rest of the software on a general purpose machine.

Mmm... but rendering to texture is mostly used to simulate effects which raytracers already offer naturally, like realistic reflections.

Yes, but I can see something like a game company using it. On a machine with ray tracing it might, say, render the caustics from a movable glass onto a table and apply them as a texture. On an earlier piece of hardware, it might just not bother, and it would still look pretty good. (Might be a worthwhile thing to try in software.)

I don't see much use for scanline features if you already have a fast raytracer available. But keep in mind that I have the point of view of a videogame programmer, so maybe I am missing important points.

Well, from the point of view of a videogames marketer, they might want to support existing hardware. It seems to me that for raytracing in and of itself to get accepted, either it would have to happen overnight, or

- Glows, fires, smoke, etc... I think these kinds of effects are simulated the same way in rasterizers and raytracers, but I have some doubts.

That's where I think a volumetric approach would really shine. A fire could be a volume object using the HOD model. When looking directly at it or seeing through it to another light source, you could just integrate a single ray through the volume. Or, with a back-to-front approach, integrate a line from the light source through the volume. When using it for diffuse or dull specular illumination, sampling a reduced volume a few times should give good results.
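Integrating one ray through such a fire volume is cheap. In sketch form (emission and absorption only, with fire_density() and fire_emission() standing in for whatever the volume model provides):

Code:
#include <math.h>

typedef struct { float r, g, b; } rgb3;

/* Placeholders for the volume model: how much stuff is at a point,
   and what color a given density glows. */
extern float fire_density(float x, float y, float z);
extern rgb3  fire_emission(float density);

/* March a ray from org along dir through the fire, accumulating emitted
   light and attenuating it by whatever lies in front. */
rgb3 shade_fire_ray(const float org[3], const float dir[3], float step, int nsteps)
{
    rgb3  out = {0.0f, 0.0f, 0.0f};
    float transmit = 1.0f;

    for (int i = 0; i < nsteps && transmit > 0.01f; i++) {
        float x = org[0] + dir[0] * step * i;
        float y = org[1] + dir[1] * step * i;
        float z = org[2] + dir[2] * step * i;
        float d = fire_density(x, y, z);
        rgb3  e = fire_emission(d);
        out.r += transmit * e.r * d * step;   /* emission, dimmed by what is in front */
        out.g += transmit * e.g * d * step;
        out.b += transmit * e.b * d * step;
        transmit *= expf(-d * step);          /* absorption along the step */
    }
    return out;
}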
 
Now, I have a hard time finding a job in any field. When I do get a job, the people are truly amazed with what I can do, but interviewers can't grok me. I'm still shaking with anger over an interview I had last Wednesday.
D*mn, and I was thinking of asking you about job opportunities in the US :D
Software is a problem; even with our mission, we found that most scientists just wanted their 30-year-old FORTRAN programs (we called them "dusty decks") to work, but just magically faster. Memory is the other big problem.
To be fair, I put some blame on modern programming languages as well. Theoretically, trendy languages like Java should provide an easy migration path for old code, but sometimes when I see "old code" vs. "trendy language implementation", I feel sick. Code in current languages is often unreadable and full of redundant information.
BTW, some FPGA chips contain small memory caches, so the bandwidth is distributed and theoretically unlimited (the more FPGAs you use, the more bandwidth you have).
In any event, what you care about is depth, and so people do (or try to do) depth-sorting, convert that into a drawing order, and then expect the graphics card to handle it. Global scanline algorithms obviate the need to do this. There is a depth sort step, but it only applies to objects that intersect a particular scan line, and spatial coherence between successive scan lines makes it fast. It also solves the decal problem, which otherwise involves garbage like using a stencil buffer or painting the background polygon with read-only Z, painting the decal with read-only Z, and painting the background with only-write-to-Z.
I had the concept that ordering was a requirement of pixel algebra for semitransparencies, but now I see there are ways to perform these operations without order.
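Something like this is what I meant earlier by a list of texels for every pixel: collect whatever fragments land on a pixel, in any order, and sort only locally at resolve time (a rough sketch, invented structures):

Code:
#include <stdlib.h>

/* One translucent fragment that landed on a pixel, in whatever order the
   renderer produced it. Invented structure, just to show the idea. */
typedef struct { float z, r, g, b, a; } fragment;

static int by_depth(const void *pa, const void *pb)
{
    const fragment *a = pa, *b = pb;
    return (a->z > b->z) - (a->z < b->z);        /* nearest first */
}

/* Resolve one pixel: sort its fragment list locally, then composite
   front-to-back. No global drawing order is ever needed. */
void resolve_pixel(fragment *frags, int count, float out[3])
{
    qsort(frags, count, sizeof(fragment), by_depth);

    float transmit = 1.0f;
    out[0] = out[1] = out[2] = 0.0f;
    for (int i = 0; i < count && transmit > 0.004f; i++) {
        out[0] += transmit * frags[i].a * frags[i].r;
        out[1] += transmit * frags[i].a * frags[i].g;
        out[2] += transmit * frags[i].a * frags[i].b;
        transmit *= 1.0f - frags[i].a;
    }
}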
I have recently read this nice article about the future of RT gfx, which has a section dedicated to this kind of problem; it's the part about Sort-Last architectures.

http://www.acmqueue.com/modules.php?name=Content&pa=showpage&pid=139&page=4

The rest of the article is also good (there are also some comments about the shortcomings of OpenGL); however, I still disagree with the author: I don't think rasterizers are the way ahead.
I think that when I finish my voxel raytracer I will code an RT general raytracer to see if it performs as I estimate.
This was all in C except for the horizontal extent drawing, which was in assembly language. The biggest bottleneck was actually float/int conversion, which requires a memory access. There's probably a way to get around that, but I sort of gave up on it, because then I managed to get full-time employment.
Yes, this is still one of the big faults of the PPC architecture. At least they fixed that problem in Altivec, where you have instructions to convert int to float and vice versa, and integer and float vectors share the same register file.
My gfx engines so far just use the minimum of floating-point math possible.
This is not big iron. It was similar to performance by games that used the vertical skew approximation to make all horizontal and vertical surfaces linear in the texture lookup, and way better than the performance of any games with arbitrarily oriented surfaces. Also nicely compatible with a ray-tracing back end on some objects.
It is not big iron, but as you already said, you were toying with very few polygons. 5,000 polygons is the geometry of a car in a racing game today. A game today can have more than 100,000 polygons on screen at full framerate.
But rasterizers traditionally don't work that way; they work by bit blasting, the algorithms for which are much simpler and therefore possibly better suited for hardware. If I were doing a GSA in hardware, I'd probably try to build a scanline engine on a chip and do the rest of the software on a general purpose machine.
This would not be a good idea, IMO. There exists a myth about the computational cost of geometry processing versus raster painting: the myth says handling polygons is cheap and painting the pixels is expensive.
This stopped being true years ago. When you are handling thousands of polygons with an average area of 10 pixels, the reverse is true! And it happens to be the case already.
Yes, but I can see something like a game company using it. On a machine with ray tracing it might, say, render the caustics from a movable glass onto a table and apply them as a texture. On an earlier piece of hardware, it might just not bother, and it would still look pretty good. (Might be a worthwhile thing to try in software.)
Now I see which kind of effects you are referring to.
However, I still think it's not hard to redo those methods so they fit a pure raytracing philosophy.
Well, from the point of view of a videogames marketer, they might want to support existing hardware. It seems to me that for raytracing in and of itself to get accepted, either it would have to happen overnight, or
A reasonable RT raytracer would need new hardware anyway, so it could be integrated into a new console or gfx card. Including a rasterizer and a raytracer in the same card should not be too difficult today.
After all, the PS3 will include a full PS2, which includes a PS1 :D
After a few generations, the rasterizer support would decay, like 2D blitters in current 3D cards.
That's where I think a volumetric approach would really shine. A fire could be a volume object using the HOD model. When looking directly at it or seeing through it to another light source, you could just integrate a single ray through the volume. Or, with a back-to-front approach, integrate a line from the light source through the volume. When using it for diffuse or dull specular illumination, sampling a reduced volume a few times should give good results.
Yes, these kinds of effects rule. My gripe is that animating volumes is very expensive in computational terms (or in memory, if you precalculate the animation).
But yes, I agree that for liquid simulations, gases, etc, volumetric is the way ahead.
 
Peskanov said:
I have recently read this nice article about the future of RT gfx, which has a section dedicated to this kind of problem; it's the part about Sort-Last architectures.

http://www.acmqueue.com/modules.php?name=Content&pa=showpage&pid=139&page=4

Thanks for the link.

It is not big iron, but as you already said, you were toying with very few polygons. 5,000 polygons is the geometry of a car in a racing game today. A game today can have more than 100,000 polygons on screen at full framerate.

But it was appropriate for the time and the hardware I was doing it on, which is why I compare it to games of the time.

This would not be a good idea, IMO. There exists a myth about the computational cost of geometry processing versus raster painting: the myth says handling polygons is cheap and painting the pixels is expensive.

The scanline engine in a global scanline algorithm does far more than painting pixels. It has to deal with a fairly complex data structure. It's essentially equivalent to doing a 2-D representation of a set of polygons, except it's a horizontal slice through the field of view. Z becomes what you'd think of as Y.

The big advantages are that each pixel is only visited once, no matter what the scene complexity, and all the first-order ray intersections for a pixel drop out of the data structures automatically.
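In skeleton form the per-scanline resolve is something like this (heavily simplified: constant-depth spans, brute force over the active list, none of the coherence or texture machinery of the real thing):

Code:
#include <float.h>

/* A flat horizontal span of one polygon on the current scanline. A real
   version interpolates Z and texture coordinates across the span and reuses
   the sorted structure from the previous scanline. */
typedef struct { int x0, x1; float z; unsigned color; } span;

/* Resolve one scanline: every pixel is written exactly once, with the color
   of the nearest span covering it, no matter how big the scene is. */
void resolve_scanline(const span *active, int nspans,
                      unsigned *row, int width, unsigned background)
{
    for (int x = 0; x < width; x++) {
        float    best_z = FLT_MAX;
        unsigned c      = background;
        for (int i = 0; i < nspans; i++)
            if (x >= active[i].x0 && x <= active[i].x1 && active[i].z < best_z) {
                best_z = active[i].z;
                c      = active[i].color;
            }
        row[x] = c;
    }
}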

Also easily parallelizable. Anyway, I thought it was promising, but that's not the way rasterizers have gone.
 
