This page is dedicated to my little patch for that awesome program: the Persistence of Vision Raytracer. To be precise, my patch is for Nathan Kopp's MegaPov v0.6a.
Note that this page is obsolete! All the changes described here were integrated into MegaPov version 0.7.0. I only keep this page around for historical reference, and because emission method 2 did not make it into the more recent versions of POV-Ray and MegaPov even though it can be quite useful. Moreover, I still use MegaPov 0.7 instead of the newer versions...
The current version is 0.6.2. It adds several small enhancements and bugfixes to the original MegaPov. It includes the following bugfixes and patches:

- fixed noise and dnoise. Now if you make a bozo and translate it by more than 10,000 units, it will still look right. This is also true for turbulence. Moreover, this version of noise is actually slightly faster (although dnoise is slightly slower...);
- the sample_spacing patch for media;
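As a minimal sketch of what the noise fix makes possible (all values here are purely illustrative), the following scene puts a bozo pigment well beyond 10,000 units from the origin; with the fixed noise it should render without artifacts:

```pov
// Sketch only: illustrative values. A bozo pigment translated far from
// the origin; with the fixed noise and dnoise, the pattern here should
// look just like the same pattern rendered near the origin.
camera { location <0, 0, -3> look_at <0, 0, 0> }
light_source { <-30, 30, -30> color rgb 1 }
plane {
  z, 1
  pigment {
    bozo
    color_map { [0.0 rgb 0] [1.0 rgb 1] }
    scale 0.2
    translate <20000, 0, 0>  // well beyond 10,000 units
  }
  finish { ambient 1 }  // show the pattern regardless of lighting
}
```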
You can download the complete Unix source code here. I have also compiled an executable for my machine (I run some flavor of Linux, but it's so heavily tweaked that nobody would recognize it). This executable seems to run properly on Mandrake 7.1. No warranty though.
Not much to say here: a few pictures are worth a thousand words, so here they are. The following scene is of very little interest in and of itself, but it shows very well what I mean. Here's the scene we'll render:
#version 3.1;
#version unofficial MegaPov 0.6;

global_settings { assumed_gamma 1.0 }

// ----------------------------------------
camera {
  location <0.0, 0.0, -4.0>
  look_at  <0.0, 0.0, 0.0>
  angle 90
}

sky_sphere {
  pigment {
    gradient y
    color_map {
      [0.0 color blue 0.6]
      [1.0 color rgb 1]
    }
  }
}

light_source { <-30, 30, -30> color rgb 1.0 }

// ----------------------------------------
box {
  <-1000, 0.5, 0.0>, <1000, 10.5, 100>
  pigment { rgbf 1 }
  hollow
  interior {
    media {
      scattering { 1, rgb 1/16 }
      density {
        boxed
        color_map {
          [ 0.0 rgb 0.0 ]
          [ 0.1 rgb 0.1 ]
          [ 0.4 rgb 0.9 ]
          [ 0.5 rgb 1.0 ]
        }
        scale <2000, 10, 100>
        translate <0, 5.5, 50>
      }
      intervals 1
      samples 5, 5
      jitter 0
      method 3
      //method 2
    }
  }
}

plane { y, -10 pigment { color rgb <0, 1, 0> } }
And the results:
Using method 2
Using MegaPov 0.6 method 3
Using µPov 0.6.1 method 3
The sample_spacing patch

This patch allows you to specify the number of samples in your media as a function of the size of the media rather than as a fixed number. This is accomplished by specifying the maximum distance between two consecutive samples. That way, you can ensure that more samples are taken where they are needed.
You use this patch by adding the sample_spacing keyword, followed by a number, to your media definition.
This patch works on a ray-by-ray basis: it increases the minimum number of samples before the usual computations are made. With media method 2 or 3, this ensures that consecutive samples are never spaced farther apart than the specified distance (unless this would require more than the maximum number of samples). With media method 1, it only ensures that the mean distance between samples is the one specified.
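For instance, a minimal media block using the keyword might look like this (all numeric values are illustrative only):

```pov
// Sketch only: illustrative values. With sample_spacing 0.5, a ray
// that crosses 40 units of this media gets at least 80 samples,
// while a 2-unit ray keeps the base minimum of 4.
media {
  scattering { 1, rgb 0.5 }
  method 3
  intervals 1
  samples 4, 1000     // minimum, maximum
  sample_spacing 0.5  // at most 0.5 units between consecutive samples
}
```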
The most obvious use of this patch is to make translucent objects by filling them with very dense scattering media. Most people who have tried this report that the object has big black areas in it unless they increase the number of samples a lot (with the corresponding increase in render time) or put a scaled-down version of the object inside it to limit the length of the rays (which is nearly impossible with complex objects, since a simple scale will not do).
Here is the object we'll use. Just a simple sphere:
sphere {
  0, 100
  pigment { rgbf 1 }
  hollow
  interior {
    media {
      scattering { 1, rgb 1 }
      method 3
      aa_level 8
      intervals 1
      samples 10, 1000
    }
  }
}
Here's the result. No comment.
Let's crank up the samples:
sphere {
  0, 100
  pigment { rgbf 1 }
  hollow
  interior {
    media {
      scattering { 1, rgb 1 }
      method 3
      aa_level 8
      intervals 1
      samples 200, 1000
    }
  }
}
Here's the result. It took 26m11s to render.
Now, we lower the samples again, but make sure that more samples are taken where needed:
sphere {
  0, 100
  pigment { rgbf 1 }
  hollow
  interior {
    media {
      scattering { 1, rgb 1 }
      method 3
      aa_level 8
      intervals 1
      samples 2, 1000
      sample_spacing 1
    }
  }
}
Here's the result. It took 11m08s to render.
Ever made the perfect candle flame, just to realize it disappeared completely on a white background? Then this patch is for you. It allows you to combine an emitting media and an absorbing media into one automagically, so that it renders right on most backgrounds. All you have to do is specify emission_type 2 in your media statement, then play with emission_extinction to get the effect you want.
First, simple emission:
#declare S = sphere {
  0, 1
  pigment { color rgbf 1 }
  hollow
  interior {
    media {
      emission 1
      method 3
      intervals 1
      samples 5, 5
      density {
        spherical
        color_map {
          [ 0.0 rgb 0.0 ]
          [ 0.5 rgb <1.0, 0.0, 0.0> ]
          [ 1.0 rgb <0.0, 1.0, 0.0> ]
        }
      }
    }
  }
  scale 1.5
}
The media disappears on a white background.
Second, we add an absorbing media. Note that the code is much longer, and if the media is complicated, there's a lot of room for error:
#declare S = sphere {
  0, 1
  pigment { color rgbf 1 }
  hollow
  interior {
    media {
      emission 1
      method 3
      intervals 1
      samples 5, 5
      density {
        spherical
        color_map {
          [ 0.0 rgb 0.0 ]
          [ 0.5 rgb <1.0, 0.0, 0.0> ]
          [ 1.0 rgb <0.0, 1.0, 0.0> ]
        }
      }
    }
    media {
      absorption 1
      method 3
      intervals 1
      samples 5, 5
      density {
        spherical
        color_map {
          [ 0.0 rgb 0.0 ]
          [ 0.5 rgb 0.39 ]
          [ 1.0 rgb 0.5 ]
        }
      }
    }
  }
  scale 1.5
}
The media is visible on the white background, but it is darkened on the black one, and the code is difficult to write.
Finally, using emission type 2:
#declare S = sphere {
  0, 1
  pigment { color rgbf 1 }
  hollow
  interior {
    media {
      emission 1
      method 3
      intervals 1
      samples 5, 5
      emission_type 2
      emission_extinction 2
      density {
        spherical
        color_map {
          [ 0.0 rgb 0.0 ]
          [ 0.5 rgb <1.0, 0.0, 0.0> ]
          [ 1.0 rgb <0.0, 1.0, 0.0> ]
        }
      }
    }
  }
  scale 1.5
}
The media is visible on the white background, it's unchanged on the black background, and the code is pretty easy to write.
This patch was inspired by a discussion on povray.binaries.images. The idea was that real life has a much greater dynamic range than either photographic film or computer screens. To compensate, films "compress" the brightness so that dark areas are still visible even in the presence of much brighter areas.
This effect does something similar. The syntax is:
global_settings {
  post_process {
    compress { strength, curvature, scale }
  }
}
The filter applies a transformation to the brightness of each pixel so that dark pixels are unmodified, but bright pixels are darkened so as to avoid saturation.
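As a sketch, a scene might enable the filter like this; the numeric values are placeholders I picked for illustration, not recommendations:

```pov
// Hypothetical values for strength, curvature and scale.
global_settings {
  post_process {
    compress { 0.5, 2.0, 1.0 }
  }
}
```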
Meaning of the parameters:
I have no sample because I haven't been able to get good results with it. It does what it's supposed to do, but I haven't obtained anything aesthetically pleasing with it. On the other hand, I'm not a photography specialist, so I hope someone can do something with it (and I'm interested in seeing the results).