views:

430

answers:

12

It seems to me as if anything is now possible with computer graphics. It seems as if we can depict cloth, water, skin, anything, completely convincingly.

Are there areas that are still a challenge, or is the focus now on finding faster algorithms and cutting rendering times?

+6  A: 
  1. Water
  2. Fire
  3. People
  4. Doing it all in realtime
  5. Physics (somewhat related to the computer graphics field)

I haven't seen digital humans that are completely convincing. Same with water and fire on any significant scale.

Look at some of the recent developments in computer game physics as examples: destructible buildings in Red Faction: Guerilla, material-based destruction in Force Unleashed, etc. Much of computer graphics revolves around video games and film, where good enough is good enough. There's a lot of clever trickery involved. There's tons of room for improvement in efficiency, scalability, thoroughness, and realism.

Steven Richards
@steven: good list. I'd say the movie Benjamin Button actually breaks new ground in realistic renderings (albeit sometimes hybrid) of people and organic material. However, I think by far the most difficult task will be achieving the same results in realtime.
andy
+3  A: 

I can't think of anything that would be harder to do than a convincing human, and that was done (IMHO) in The Curious Case of Benjamin Button. Check out this site about the making of BB. The graphics for the face in the movie were made by computers, but there is still the challenge of animating the face, which cannot yet be done solely by computer.

erik
Also, this TED video is a good overview of "the making of Ben's head": http://www.ted.com/index.php/talks/ed_ulbrich_shows_how_benjamin_button_got_his_face.html
CraigD
@erik: good point
andy
+5  A: 

Pretty much everything is still impossible in graphics, if you want to do it properly. Cloth, water and skin are all faked to hell and back in order to achieve realtime framerates. We're still unable to do what's probably the most fundamental effect of all: Proper lighting.
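
To sketch what "proper" means here: physically correct lighting amounts to solving the rendering equation, which is recursive - the light leaving any point depends on the light arriving there from every direction, which itself left other surfaces:

```latex
% Rendering equation: outgoing radiance = emitted + reflected incoming radiance
L_o(x, \omega_o) = L_e(x, \omega_o)
    + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (\omega_i \cdot n) \, \mathrm{d}\omega_i
```

Here L_o is outgoing radiance, L_e is emitted radiance, f_r is the surface's BRDF, and the integral runs over every incoming direction in the hemisphere. Realtime techniques get their speed by approximating or precomputing pieces of this integral, which is exactly the "faking" being described.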

jalf
Do you mean "unable to do... Proper lighting"?
ChrisF
yep, thanks for catching that. :)
jalf
+2  A: 

I'd agree with Steven and erik. We're deep in the Uncanny Valley when it comes to people.

And jalf is correct when he points out that a lot of things are still all smoke and mirrors.

ChrisF
+2  A: 

3D
Not pictures that look like 3D; I'm referring to actual 3D. Once we get 2D down, we do it all again in the 3rd dimension. We are just now starting to see some pretty cool stuff in theaters, as well as some very interesting new products coming out that do not require special glasses anymore.

MostlyLucid
+1  A: 

Tons of stuff is still hard or extremely slow.

Try to combine transparent objects with fogging for example.
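
A hypothetical minimal sketch of why that combination is awkward (scalar "colors" and invented names for brevity): fog has to be applied to each transparent surface at that surface's own depth, and the fogged surfaces then have to be composited back to front; a single fog pass over the already-blended result gets it wrong, and the sorting requirement is where things get slow.

```python
import math

def fog_factor(depth, density=0.05):
    """Exponential fog: fraction of a surface's color that survives to the camera."""
    return math.exp(-density * depth)

def composite(surfaces, fog_color, background):
    """Blend fogged, semi-transparent surfaces back to front (farthest first).
    Each surface is (depth, alpha, color); colors are scalars for brevity."""
    result = background
    for depth, alpha, color in sorted(surfaces, key=lambda s: -s[0]):
        f = fog_factor(depth)
        # Fog is applied per surface, at that surface's own depth...
        fogged = f * color + (1 - f) * fog_color
        # ...and only then blended with the standard "over" operator.
        result = alpha * fogged + (1 - alpha) * result
    return result
```

The depth sort is the painful part: correct results need the transparent surfaces ordered per pixel, which rasterization hardware does not give you for free.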

Nils Pipenbrinck
+1  A: 

An API that insulates you from the mathematics and is non-programmer friendly.

ojblass
+1  A: 

Some consider computer vision to be a frontier of computer graphics. It's basically CG in reverse: instead of going from model to images, you go from images to a model. Computer vision is a young field, with a wealth of open problems.

redmoskito
+3  A: 

Raster graphics are basically a huge collection of hacks. Raytracing or similar methods are more "proper." You get things like radiosity, reflection and refraction for free with raytracing. Doing raytracing in real time would be HUGE for games.
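
A rough illustration of why reflection comes "for free" (a hypothetical minimal scene, not a real renderer): when a ray hits a surface, it simply spawns a new ray in the reflected direction, so the effect falls out of the recursion instead of needing a separate screen-space trick.

```python
import math

# Hypothetical one-sphere scene: a mirror ball in front of the camera.
CENTER, RADIUS = (0.0, 0.0, -3.0), 1.0
SKY = 1.0          # brightness returned for rays that escape the scene
MAX_DEPTH = 4

def hit_sphere(origin, direction, center=CENTER, radius=RADIUS):
    """Distance along a unit-length ray to the nearest sphere hit, or None."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c          # a == 1 because direction is unit-length
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None

def trace(origin, direction, depth=0):
    """Reflection 'for free': a hit just spawns a new ray, recursively."""
    t = hit_sphere(origin, direction)
    if t is None or depth >= MAX_DEPTH:
        return SKY
    hit = [o + t * d for o, d in zip(origin, direction)]
    n = [(h - c) / RADIUS for h, c in zip(hit, CENTER)]        # surface normal
    d_dot_n = sum(d * ni for d, ni in zip(direction, n))
    refl = [d - 2.0 * d_dot_n * ni for d, ni in zip(direction, n)]
    offset = [h + 1e-4 * ni for h, ni in zip(hit, n)]          # avoid self-hit
    return 0.8 * trace(offset, refl, depth + 1)                # 80%-reflective mirror
```

Doing this per pixel, per bounce, for millions of rays per frame is exactly why realtime raytracing is expensive.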

Matt Olenik
I have thought about this exact thing myself, but the amount of data to be processed is simply huge, and I don't think it will be feasible to do real-time raytracing with the processor architectures of today. Maybe when we have massive numbers of cores (hundreds) per chip this might be possible. But it would be so great to be able to do this. Graphics would improve drastically and graphics programming would be much simpler, as you stated.
Makis
I don't think it's that far off. Intel showed Larrabee doing some basic real-time raytracing.
Matt Olenik
Photon mapping is where it's at!
Richard Szalay
Classic raytracing is only the first step, of course. Skin looks the way it does because light is reflected both from the surface and, in progressively smaller amounts, from light that penetrates the skin, so your raytracer needs to be enhanced to account for multiple possible paths for the same ray.
Cruachan
+4  A: 

I've worked in videogames for 8 years, and I've seen the graphics in games and movies getting better every year.

In my opinion we need not worry about getting the graphics to look right; with the techniques we have right now and increasing CPU power, that will not be the problem... for still shots.

The problem I see in games and movies right now is (imho) the character animation that just doesn't look right. As beautifully as they might be rendered, characters in videogames and movies do not animate realistically - even when motion capturing is used. Characters are missing something that makes them look natural - be it the breathing pattern, the random micromovements bodies make, the way someone randomly blinks her eyes quicker or slower... I can't put my finger on what it is exactly; it just doesn't look natural.

So.. I think the next field of research should be (human) motion and animation, to get stuff to look right.

Led
A: 

The challenge seems to be performant modeling of surfaces WITHOUT POLYGONS. Polygons are rasterized surfaces, being displayed on rasterized screens.

devio
+1  A: 

About raytracing: raytracing is cool, but raytracing in the 'standard' way doesn't give you realistic lighting, since rays are cast from the camera (the position of your eyes when you sit in front of your monitor) through the viewplane (your computer screen) to see where they end up.

In the real world, it doesn't work that way. You don't emit radar/sonar rays from your eyes and check what they hit; instead, other objects emit energy and sometimes this energy ends up on your retina.

The proper way to calculate lighting would therefore be something like photon mapping, where every light source emits energy that gets transferred through media (air, water) and reflects/refracts across/through materials.

Think about it - shooting a ray from the camera through a pixel on your screen gives you a single direction that you'll check for light intensity, while in reality light could arrive at lots of different angles and end up at that same pixel. So 'standard' raytracing doesn't give you light-scattering effects unless you implement a special hack to take them into account. And aren't hacks the exact reason why people want to use another way besides polygon rasterizing anyway?
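
The "single direction per pixel" point is visible directly in how primary rays are usually generated. A minimal sketch, assuming a pinhole camera at the origin looking down -z (the function name and field of view are invented for illustration): every pixel maps to exactly one direction, so anything that reaches the eye along another path has to be handled some other way.

```python
import math

def primary_ray(px, py, width, height, fov_deg=90.0):
    """Map a pixel to the ONE direction a classic raytracer samples for it.
    Pinhole camera at the origin, looking down -z; no lens, no area sampling."""
    aspect = width / height
    scale = math.tan(math.radians(fov_deg) / 2)
    x = (2 * (px + 0.5) / width - 1) * aspect * scale   # pixel center -> [-1, 1]
    y = (1 - 2 * (py + 0.5) / height) * scale           # flip so +y is up
    length = math.sqrt(x * x + y * y + 1)
    return (x / length, y / length, -1 / length)        # unit direction
```

Methods that also trace from the light side (photon mapping, bidirectional path tracing) exist precisely because light arriving at that pixel can come via paths these camera rays will never find.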

Raytracing isn't the final solution.

The only real solution is an infinite process where lights emit energy that bounces around the scene and, if you're lucky, ends up on your camera lens. Since an infinite process is pretty hard to simulate, we'll have to approximate it at one point or another. Games use hacks to make stuff look good, but in the end every rasterizer / renderer / tracer / whatever strategy has to implement a limit - a hack - at some point.

The important thing is - does it really matter ? Are we going for a 100% simulation of real life, or is it good enough to calculate a picture that looks 100% real, whatever the technique being used ?

If you can't tell if a picture is real or CGI, does it matter what method or hacks have been used ?

Led