Peskanov said:
This sounds like a great computing experience.
It was. We were a computational research institute, meaning that our mission was to adapt new hardware and algorithms to a wide range of scientific problems. We were also a 100% non-classified, open shop. At various times we had a Cyber 200, an ETA-10, a Cray Y/MP, the CM, an IBM shared-memory box, and a cluster of a couple of hundred RS/6000s with Fibre Channel. But the Institute self-destructed starting around 1995. I was one of the 14 core charter members, and I was one of the last to leave, in 1998.
It was fun for a while, but the way that it self-destructed really left a bad taste in my mouth. We were always a popular whipping boy; Robert Park always referred to us as "Don Fuqua's pet project," but those always seemed like stupid put-downs, as if Florida State University weren't a real university. But, if anything, we got to do more bleeding-edge stuff than any of the NSF sites, because it was a new thing for the university.
Now, I have a hard time finding a job in any field. When I do get a job, people are truly amazed by what I can do, but interviewers can't grok me. I'm still shaking with anger over an interview I had last Wednesday.
Do you know if any of the CM software is in the public domain now? I have read several papers in the IEEE magazines about the CM, but all of them were quite low-level, and my interest is in the languages and tools used there.
I don't know. I haven't followed it up.
BTW, epepke, what do you think about the future of FPGA technology as a reprogrammable extension for general computing? I find the current efforts fascinating, but I see the same problem already found in parallel computing: software.
Software is a problem; even with our mission, we found that most scientists just wanted their 30-year-old FORTRAN programs (we called them "dusty decks") to work, only magically faster. Memory is the other big problem.
Maybe you should have tried those anti-copy-protection devices for VHS that were so popular in the nineties... They were great for readjusting video sync signals.
Oh, we did. But the problem was not irregularities in the horizontal retrace rate; it was the vertical retrace rate that the disc recorder couldn't sync on. And the board didn't do genlock. Monitors worked just fine, and we were able to record to tape moderately well, but it lacked the quality that we were able to get from the laserdisc recorder.
Anyway, as I said before, it's true that for simple scenes rasterizers are much more economical. It's no surprise they got so popular.
I think that texture mapping was what did it. The 3000 series didn't have any texture facilities at all. Textures have traditionally been of fairly limited utility in scientific and engineering work, because usually any shape has the same spatial resolution as any data you would want to use for color, so vertex colors are usually more appropriate.
However, in games, of course, textures are wonderful, because you can do something that looks good enough, cheaply.
Ok, I was just pointing out the reason for this need: ordering is necessary when painting consecutive semitransparent layers, or you risk getting inconsistent color results on screen.
The same happens with DirectX or any other 3D interface. IMO it's a requirement of any rasterizer featuring semitransparency (nearly all of them!).
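For concreteness, here is a minimal sketch of why the order matters with the usual "over" blend that rasterizers apply per pixel (a generic C illustration, not code from any particular API): compositing the same two semitransparent layers in opposite orders gives different colors.

    /* Standard "over" blend (source alpha, one-minus-source-alpha).
       Compositing the same two layers in opposite orders disagrees. */
    #include <stdio.h>

    typedef struct { float r, g, b, a; } Color;

    /* Returns src composited over dst. */
    static Color blend_over(Color src, Color dst)
    {
        Color out;
        out.r = src.r * src.a + dst.r * (1.0f - src.a);
        out.g = src.g * src.a + dst.g * (1.0f - src.a);
        out.b = src.b * src.a + dst.b * (1.0f - src.a);
        out.a = src.a + dst.a * (1.0f - src.a);
        return out;
    }

    int main(void)
    {
        Color red   = { 1.0f, 0.0f, 0.0f, 0.5f };
        Color green = { 0.0f, 1.0f, 0.0f, 0.5f };
        Color black = { 0.0f, 0.0f, 0.0f, 1.0f };

        Color a = blend_over(red, blend_over(green, black));  /* green drawn first */
        Color b = blend_over(green, blend_over(red, black));  /* red drawn first   */

        printf("green then red: %.2f %.2f %.2f\n", a.r, a.g, a.b); /* 0.50 0.25 0.00 */
        printf("red then green: %.2f %.2f %.2f\n", b.r, b.g, b.b); /* 0.25 0.50 0.00 */
        return 0;
    }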
IMO, it shouldn't be, and there are better ways to do it. It's OK for games, because the designer can sit down and make assumptions based on viewpoint constraints. But for scientific applications, you don't know what kind of data or viewpoints are going to be used.
In any event, what you care about is depth, and so people do (or try to do) depth-sorting, convert that into a drawing order, and then expect the graphics card to handle it. Global scanline algorithms obviate the need to do this. There is a depth sort step, but it only applies to objects that intersect a particular scan line, and spatial coherence between successive scan lines makes it fast. It also solves the decal problem, which otherwise involves garbage like using a stencil buffer or painting the background polygon with read-only Z, painting the decal with read-only Z, and painting the background with only-write-to-Z.
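A rough sketch of the per-scanline idea, heavily simplified (just an illustration, not the actual code being described): for each scan line, only the spans of polygons crossing that line get depth-sorted, and the line is composited from those spans. A real global scanline algorithm also exploits coherence between successive scan lines and splits spans where polygons interpenetrate; none of that is shown here.

    #include <stdlib.h>

    typedef struct {
        int   x0, x1;      /* horizontal extent on this scan line (x1 exclusive)  */
        float z;           /* representative depth; larger = farther from the eye */
        float r, g, b, a;  /* flat color; a < 1 means semitransparent             */
    } Span;

    static int farther_first(const void *pa, const void *pb)
    {
        const Span *a = pa, *b = pb;
        return (a->z < b->z) - (a->z > b->z);   /* sort back to front */
    }

    /* Composite one scan line of 'width' pixels into rgb[3 * width]. */
    static void draw_scanline(float *rgb, int width, Span *spans, int nspans)
    {
        /* The depth sort applies only to the spans crossing this line. */
        qsort(spans, (size_t)nspans, sizeof(Span), farther_first);

        for (int i = 0; i < nspans; i++) {
            const Span *s = &spans[i];
            int lo = s->x0 < 0 ? 0 : s->x0;
            int hi = s->x1 > width ? width : s->x1;
            for (int x = lo; x < hi; x++) {      /* "over" blend, back to front */
                float *p = &rgb[3 * x];
                p[0] = s->r * s->a + p[0] * (1.0f - s->a);
                p[1] = s->g * s->a + p[1] * (1.0f - s->a);
                p[2] = s->b * s->a + p[2] * (1.0f - s->a);
            }
        }
    }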
Of course, you can paint just solid colors, and then ordering should not be a requirement. But who wants 3D gfx without semitransparency?
I don't think it is SGI's fault; the only alternative to that philosophy is to maintain a list of texels for every pixel and sort them locally at the end of the scene. This approach would be compatible with raytracers and is also extremely parallelizable; however, you need really big iron for this!
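A toy sketch of that "list of texels per pixel, sort at the end" idea (my own illustration of what amounts to an A-buffer / order-independent-transparency scheme, not any specific hardware's design): the rasterizer appends fragments to per-pixel lists in any order, and each pixel is sorted and blended only after the whole scene is in. The per-pixel storage is exactly what makes it big-iron territory.

    #include <stdlib.h>

    typedef struct Fragment {
        float z;                /* depth; larger = farther              */
        float r, g, b, a;       /* color and opacity                    */
        struct Fragment *next;  /* next fragment landing on this pixel  */
    } Fragment;

    /* Called by the rasterizer for every fragment, in any order at all. */
    static void add_fragment(Fragment **pixel_list, float z,
                             float r, float g, float b, float a)
    {
        Fragment *f = malloc(sizeof *f);   /* no error handling in this toy */
        f->z = z; f->r = r; f->g = g; f->b = b; f->a = a;
        f->next = *pixel_list;
        *pixel_list = f;
    }

    static int farther_first(const void *pa, const void *pb)
    {
        const Fragment *a = *(const Fragment *const *)pa;
        const Fragment *b = *(const Fragment *const *)pb;
        return (a->z < b->z) - (a->z > b->z);
    }

    /* End of scene: sort one pixel's fragments back to front and blend them. */
    static void resolve_pixel(const Fragment *list, float out_rgb[3])
    {
        const Fragment *buf[64];   /* arbitrary cap on per-pixel depth complexity */
        int n = 0;
        for (const Fragment *f = list; f && n < 64; f = f->next)
            buf[n++] = f;
        qsort(buf, (size_t)n, sizeof buf[0], farther_first);

        out_rgb[0] = out_rgb[1] = out_rgb[2] = 0.0f;
        for (int i = 0; i < n; i++) {
            out_rgb[0] = buf[i]->r * buf[i]->a + out_rgb[0] * (1.0f - buf[i]->a);
            out_rgb[1] = buf[i]->g * buf[i]->a + out_rgb[1] * (1.0f - buf[i]->a);
            out_rgb[2] = buf[i]->b * buf[i]->a + out_rgb[2] * (1.0f - buf[i]->a);
        }
    }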
The global scanline algorithms I used to play around with gave me, on a 640 by 480 image, 30 frames per second with a hundred textured polygons and 5 frames per second with five thousand, on a Mac running at something like 90 MHz. This was all in C except for the horizontal extent drawing, which was in assembly language. The biggest bottleneck was actually float/int conversion, which requires a memory access. There's probably a way to get around that, but I sort of gave up on it, because then I managed to get full-time employment.
This is not big iron. It was similar to the performance of games that used the vertical skew approximation to make all horizontal and vertical surfaces linear in the texture lookup, and way better than the performance of any games with arbitrarily oriented surfaces. It was also nicely compatible with a ray-tracing back end on some objects.
But rasterizers traditionally don't work that way; they work by bit blasting, the algorithms for which are much simpler and therefore possibly better suited for hardware. If I were doing a GSA in hardware, I'd probably try to build a scanline engine on a chip and do the rest of the software on a general purpose machine.
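On the float/int conversion bottleneck mentioned above: one common workaround (a hypothetical sketch, not what was actually done back then) is to keep the texture coordinates in 16.16 fixed point inside the span loop, so the per-pixel stepping and truncation are just integer adds and shifts, with only a handful of float-to-int conversions per span.

    #include <stdint.h>

    #define FIXED_SHIFT 16
    #define TO_FIXED(f) ((int32_t)((f) * (1 << FIXED_SHIFT)))   /* float -> 16.16 */

    /* Draw one horizontal extent of a textured span.  Caller guarantees the
       interpolated (u, v) stay inside the texture for the whole extent. */
    static void texture_span(uint32_t *dst, int count,
                             const uint32_t *texture, int tex_width,
                             float u0, float v0, float du, float dv)
    {
        int32_t u  = TO_FIXED(u0), v  = TO_FIXED(v0);   /* two conversions per span  */
        int32_t ui = TO_FIXED(du), vi = TO_FIXED(dv);   /* ...plus two for the steps */

        for (int x = 0; x < count; x++) {
            int tu = u >> FIXED_SHIFT;   /* per-pixel truncation is just a shift,  */
            int tv = v >> FIXED_SHIFT;   /* no float-to-int conversion in the loop */
            dst[x] = texture[tv * tex_width + tu];
            u += ui;
            v += vi;
        }
    }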
Mmm... but rendering to texture is mostly used to simulate effects which raytracers already offer naturally, like realistic reflections.
Yes, but I can see something like a game company using it. On a machine with ray tracing it might, say, render the caustics from a movable glass onto a table and apply them as a texture. On an earlier piece of hardware, it might just not bother, and it would still look pretty good. (Might be a worthwhile thing to try in software.)
I don't see much use for scanline features if you already have a fast raytracer available. But keep in mind that I have the point of view of a videogame programmer, so maybe I am missing important points.
Well, from the point of view of a videogame marketer, they might want to support existing hardware. It seems to me that for raytracing in and of itself to get accepted, either it would have to happen overnight, or
- Glows, fire, smoke, etc... I think these kinds of effects are simulated the same way in rasterizers and raytracers, but I have some doubts.
That's where I think a volumetric approach would really shine. A fire could be a volume object using the HOD model. When looking directly at it or seeing through it to another light source, you could just integrate a single ray through the volume. Or, with a back-to-front approach, integrate a line from the light source through the volume. When using it for diffuse or dull specular illumination, sampling a reduced volume a few times should give good results.
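A sketch of what "integrate a single ray through the volume" could look like, using a generic emission-absorption model stepped front to back. The density() and emission() functions here are hypothetical placeholders standing in for whatever volumetric fire model you actually use (the HOD model mentioned above, for instance):

    #include <math.h>

    typedef struct { float x, y, z; } Vec3;
    typedef struct { float r, g, b; } Color3;

    /* Placeholder fire model: a fuzzy ball of hot gas around the origin.
       A real renderer would plug in its own volumetric model here. */
    static float density(Vec3 p)                 /* absorption coefficient at p */
    {
        float r2 = p.x * p.x + p.y * p.y + p.z * p.z;
        return 4.0f * expf(-4.0f * r2);
    }

    static Color3 emission(Vec3 p)               /* emitted light (flame color) */
    {
        float heat = density(p);
        Color3 c = { heat, 0.6f * heat, 0.2f * heat };
        return c;
    }

    /* March from 'origin' along unit 'dir' for 'length' units in 'steps' steps,
       accumulating emission attenuated by everything in front of it. */
    static Color3 integrate_ray(Vec3 origin, Vec3 dir, float length, int steps)
    {
        Color3 accum = { 0.0f, 0.0f, 0.0f };
        float  transmittance = 1.0f;       /* fraction of background still visible */
        float  dt = length / (float)steps;

        for (int i = 0; i < steps && transmittance > 0.001f; i++) {
            Vec3 p = { origin.x + dir.x * dt * i,
                       origin.y + dir.y * dt * i,
                       origin.z + dir.z * dt * i };

            float  alpha = 1.0f - expf(-density(p) * dt);  /* absorbed this step */
            Color3 e     = emission(p);

            accum.r += transmittance * alpha * e.r;
            accum.g += transmittance * alpha * e.g;
            accum.b += transmittance * alpha * e.b;
            transmittance *= 1.0f - alpha;
        }
        return accum;  /* composite over the background scaled by 'transmittance' */
    }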