Most Liked Content
Posted steve eyeon on 15 November 2011 - 03:36 PM
In the informative interviews Mike Seymour did on FXGuide about deep compositing, he used examples of fog and a row of candle lights to explain the concept. These can be achieved in Fusion with the Volume Fog tool.
The Volume Fog tool is a special-case renderer that takes a 3D camera and lighting along with 'deep images' and renders them out from the camera's point of view.
The 'deep images' carry X, Y and Z, so they can describe a volume of a 3D scene. The World Position Pass (WPP) is used to occlude by depth, so beauty passes can be merged into the volume image.
So nonlinear fog volumes of varying density can be integrated with the 3D scene. A rendered beauty pass, along with its WPP (RGBA + XYZ), is placed into the 3D environment of the Volume tool and interacts with the fog volume. Because the World Position Pass provides a fixed reference in XYZ coordinates, a fog volume stays correctly locked to the scene. A Z-buffer, on the other hand, is relative to the camera's rendered point of view, so camera-based depth-cue fog is not fixed to the scene.
In Fusion, you can create volumes procedurally using tools like Fast Noise or particles, and you can also load them via EXR. By working procedurally, or with a mix of loaded and procedural data, the amount of data written to disk can be dramatically reduced.
In Fusion 6.3, adding lighting models to the Volume Fog tool takes what can be achieved in this new deep arena to a whole new level. GPU acceleration makes the deep volumes interactive, even real-time, when rendering.
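To make the WPP-versus-Z-buffer point above concrete, here is a minimal NumPy sketch (this is not Fusion's API; the function name, spherical fog volume, and parameters are all illustrative) of fog evaluated from a world position pass. Because density is computed from world-space XYZ, the fog volume stays locked to the scene no matter where the camera is:

```python
import numpy as np

def fog_from_wpp(beauty, wpp, fog_center, fog_radius, fog_color):
    """Blend fog into a beauty pass using a world position pass (WPP).

    beauty:     (H, W, 3) rendered RGB beauty pass
    wpp:        (H, W, 3) world-space XYZ per pixel
    fog_center, fog_radius, fog_color: a hypothetical spherical fog
                volume, purely for illustration
    """
    # Distance of every pixel's world point from the fog volume's center.
    dist = np.linalg.norm(wpp - np.asarray(fog_center), axis=-1)
    # Density falls off linearly toward the sphere's edge (zero outside).
    density = np.clip(1.0 - dist / fog_radius, 0.0, 1.0)[..., None]
    # The volume is defined in world space: moving the camera changes the
    # per-pixel WPP values, but the fog itself does not move.
    return beauty * (1.0 - density) + np.asarray(fog_color) * density
```

A camera-space Z-buffer version of the same blend would key density off distance from the camera instead, which is why depth-cue fog swims when the camera moves.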
Posted xmare on 17 November 2011 - 05:15 AM
One learns something every day :)
Posted samsoe on 01 November 2012 - 09:44 AM
Looking forward :)
Posted KellyGriffin_PortlandOR on 26 October 2012 - 02:04 PM
Posted pingking on 24 October 2012 - 03:11 PM
Posted Kristof on 10 October 2012 - 05:13 AM
Looking forward to giving it a try, and thanks for sharing.
Posted Gringo on 12 September 2012 - 05:22 AM
Neat and logical interface.
But indeed, what I always missed in the CnP was the possibility to drag multiple points at once.
And this tool still doesn't replace the Krokodove Positioner as it has no separate BG / FG inputs.
The advantage of Krokodove is that you can position corners of an image with different resolution over the background in the background's coordinates without preliminary reformatting.
Would be great to implement such functionality as the BG and FG image are often different resolutions and aspects.
So, if the FG isn't connected, the tool works just like it does now. If both image inputs are used, it composites FG over BG and the points are transformed in the BG's coordinates.
Perhaps, you could even borrow the Merge multibutton control from the Tracker to provide more output options (BG only / FG only / FG over BG (default) / BG over FG).
Concerning the Source / Destination choice, you may notice that in the GridWarp, when you switch to Source, it shows the original undistorted image, which makes it easier to position the source points.
Although, you can always display the image from the previous node, so it's not so important...
In the standard CornerPositioner and in the Tracker the points are listed in the following order: Upper-Left, Upper-Right, Bottom-Left, Bottom-Right.
Do you also think it would be nice to stick to this standard?
I would even use those definitions (Upper-Left, Upper-Right, Bottom-Left, Bottom-Right) instead of the numbers to avoid confusion.
I hope this doesn't sound like too much criticism; I wrote all this because I'm really looking forward to using this tool the next time I need corner positioning.
Thank you once again!
Posted Ray Smithers on 25 March 2010 - 11:51 PM
ALSO please "like us" on Facebook at FPRstudios.
Posted shaily on 14 April 2006 - 08:15 AM
Pilalitos, on Apr 14 2006, 06:29 AM, said:
Click on RGB, then click on HLS again.
Posted HDSLR GUY on 30 November 2012 - 01:20 PM
uploading a still so that you can feel my pain
Posted alt on 12 November 2012 - 10:30 AM
I like to grade/test/stylize textures and make in-game effects (rendered to spritesheets via the excellent Quilt plugin) in a game-environment mockup I build/import in Fusion. However, some features that would be highly useful for game assets are missing, namely support for:
- vertex colors and vertex alpha (for transparency and texture blending)
- multiple uv-channels per mesh (current, disgusting workaround for lightmap grading is to export several meshes, one per uv-set, and composite renders)
- some way to build "shaders" (so I could vary textures by their world coordinates and mix textures by vertex colors of the mesh - maybe this could be done as a custom tool?)
- also Fusion seems to recalculate vertex normals of imported objects?
I think a custom-tool shader would be useful for VFX work too, and it's not hard to invent uses for multiple UV textures or vertex colors in motion graphics work.
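For readers unfamiliar with the vertex-color blending being requested above, here is a minimal NumPy sketch (not a Fusion feature; function name and shapes are illustrative) of what the shader would compute: two textures mixed per pixel by a vertex-color channel that the rasterizer has interpolated across each triangle:

```python
import numpy as np

def blend_by_vertex_color(tex_a, tex_b, weights):
    """Blend two textures per pixel by an interpolated vertex-color weight.

    tex_a, tex_b: (H, W, 3) textures sampled for the same fragments
    weights:      (H, W) vertex-color channel, interpolated across each
                  triangle by the rasterizer, in [0, 1]
    """
    w = np.clip(weights, 0.0, 1.0)[..., None]
    # Classic terrain-style splatting: weight 0 shows tex_a, 1 shows tex_b.
    return tex_a * (1.0 - w) + tex_b * w
```

Painting the weights into vertex colors (or a vertex alpha) is what makes the blend travel with the mesh rather than with screen space.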
Posted French_Fry on 31 October 2012 - 03:46 PM
It seems to work fine when I set it up, but when I close the comp and open it back up, it says it can't get parameters, or foreground images, etc...
Is it not compatible with Fusion 6.4?
Posted satknot on 27 September 2012 - 04:12 AM
I'm new here and I'm sorry for my poor English.
I'm currently making a basic 3D scene with an image plane, adding some bump, displacement, lighting and a camera move.
After that I just want to render it with the OGL renderer.
Everything works perfectly and it renders well on my machine.
The problem is that when I send it to the render farm, it doesn't work.
In the console of Backburner, I get an error message:
Renderer3D1: warning - lighting mode is set per-vertex (reason: shaders not supported)
Renderer3D1: Render aborted
The strange thing is that my lighting is set to per-pixel.
If I switch it to per-vertex, I get another message:
Renderer3D1: fatal error - failed to create GL context
Renderer3D1: Render aborted
Posted mixepix on 04 September 2012 - 08:32 AM
Is it possible to use the nice Defocus lens blur instead of the ugly DepthBlur for depth maps, or is Lenscare my only option?
Posted dunn on 07 February 2012 - 05:38 AM
Posted bfloch on 09 November 2011 - 04:33 AM
The problem is that the particles can only show either one bitmap or one color. The rendered gradient looks like single colors, so you can only get one pixel per patch (disable Merge1 to see the pixel effect).
The real uvs on the other hand have the 0-1 gradients per patch but are all in local space from 0-1. Connect the Render3D1 to the Texture1 node directly to see the effect.
The trick is to scale the UV space down to a fraction per patch and add the patch's offset. That's all, folks. Probably a good trick to make a video tutorial from. I will do it. Now that ...
The BC_substractFraction is poop, btw. It seems it wasn't needed, or there is a logical mistake. I did this in literally 5 minutes, so excuse the flaw.
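The divide-and-offset trick described above can be sketched in a few lines (plain Python, not the comp's expressions; the function name and the grid parameters are mine): each patch's local 0-1 UV is scaled down to its fraction of the shared space, then shifted by the patch's position in the grid:

```python
def local_to_global_uv(u, v, col, row, ncols, nrows):
    """Map a patch-local UV (each patch spans 0-1) into one shared
    0-1 UV space for an ncols x nrows grid of patches.

    (col, row) is the patch's position in the grid, zero-based."""
    # Scale the local UV down to the patch's fraction of the shared
    # space, then add the patch's offset within the grid.
    return ((col + u) / ncols, (row + v) / nrows)
```

With this mapping every patch gets its own slice of the global gradient instead of all patches collapsing onto the same local 0-1 range.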
Posted Tilt on 03 October 2011 - 12:19 PM
I have literally just begun working on a video tutorial about how to do this. Unfortunately the voiceover's not done yet, but here's a zip file with two images and a comp that might get you started.
What you need to take care of:
- In the projection tab of both cameras, set the mode to "Texture". Only this mode allows layering of multiple projections.
- Attach a Catcher tool to the fbx model's material input to catch the texture projections.
- Set color accumulation mode to "Blend".
- In one of the cameras, you need to increase the projection priority. This will be the foreground projection that will cover the other one.
- The Renderer3D needs to be set to Software Renderer. Unfortunately, the OpenGL one can't handle projections.
- Now you just need to mask the foreground projection before it's going into the camera. You can use a channel boolean or a matte control for example.
- Attachment: projection tutorial.zip (371.92 KB)
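The priority-and-mask layering in the steps above can be sketched as a simple accumulation (plain NumPy, not Fusion's Catcher; the function name and tuple layout are illustrative): projections are applied from lowest to highest priority, and each one covers what is below wherever its mask/alpha is set:

```python
import numpy as np

def layer_projections(projections):
    """Accumulate camera projections 'Blend'-style: iterate from lowest
    to highest priority; each projection covers what is below it
    wherever its alpha (the mask on the projection) is set.

    projections: list of (priority, rgb (H, W, 3), alpha (H, W)) tuples.
    """
    projections = sorted(projections, key=lambda p: p[0])
    out = np.zeros_like(projections[0][1])
    for _, rgb, alpha in projections:
        a = np.clip(alpha, 0.0, 1.0)[..., None]
        out = rgb * a + out * (1.0 - a)
    return out
```

This is why masking the foreground projection before it goes into the camera works: where its alpha is zero, the lower-priority projection shows through untouched.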
Posted Tilt on 17 August 2011 - 08:19 AM
Posted gmccomiskie on 30 August 2004 - 06:00 PM
LEE STRANAHAN learning Digital Fusion SCAM!
If you are interested in learning Digital Fusion, DO NOT, repeat, DO NOT pay LEE STRANAHAN any money. This man is a scam artist. The website that explains the course is slick and looks reputable, but don't be fooled. Dozens of people, including myself, have been sucked into his Learning Digital Fusion SCAM at the below site:
So if you see the following email addresses asking for payment via PayPal or any other means, DON'T.
Lee was very quick to take our money but hardly ever responds to any correspondence once he has your funds. Over a period of 8 months he managed to tell us a variety of personal excuses (all of which have turned out to be lies). But when pressed for any idea as to when he will start the course, he mysteriously disappears and refuses to respond to any of the people who, like me, paid him in full upfront for the course (US$300). I personally lost another US$200 (US$500 total) for items which Lee was supposed to have bought for me locally and posted. He has not. I would hate for anyone out there to go through what the current course members have been through with this guy. YOU HAVE BEEN WARNED!!
Posted Jules on 17 February 2009 - 06:14 PM
As much as I sometimes feel like a novice compared to some of the work I've seen displayed here, I will get the ball rolling...