Software

Firefox / WebGL
Submitted by Administrator on Saturday, 09/04/2010
Q&A: Firefox / WebGL
Arun, Mozilla
April 2010
 

 

"We definitely don't feel that WebGL will go the way of VRML. Firstly, WebGL is a low-level procedural technology, not a declarative markup technology, and lends itself to the most general use case for 3D programming. Secondly, even though so far only beta versions of Chromium, Safari, and Firefox support WebGL, the number of demo sites showcasing the technology is indicative of the enthusiasm for a low-level 3D API in the market. The market is ready and enthusiastic for WebGL."

  
Q1: What prompted the Mozilla Foundation to support WebGL?
A1: One of Mozilla's Principal Engineers, Vladimir Vukicevic (http://blog.vlad1.com/), originally wrote the Canvas3D extension, which was a precursor to the WebGL work. Fairly wide support for the HTML5 Canvas element by modern browsers, along with increasing support for OpenGL ES by various hardware drivers, led us to conclude that the time was right for a 3D drawing context within the HTML5 Canvas element.
JavaScript performance has gotten better and better over the years, bolstering confidence that it will be a great environment for 3D applications. Our approach has been to create a low-level binding to OpenGL ES 2.0, so that developers familiar with OpenGL ES 2.0 will recognize many of the interfaces we are exposing in JavaScript.
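To make that "low-level binding to OpenGL ES 2.0" concrete, here is a minimal sketch (ours, not Mozilla's) of how a page asks an HTML5 canvas for the 3D drawing context; the context name is an assumption that varied across early builds ("experimental-webgl" in nightlies, "webgl" later):

```typescript
// Minimal, hypothetical sketch: request the WebGL context and clear the canvas.
const canvas = document.createElement("canvas");
const gl = (canvas.getContext("webgl") ||
  canvas.getContext("experimental-webgl")) as WebGLRenderingContext | null;

if (gl) {
  // The API mirrors OpenGL ES 2.0: the same state-machine calls, buffers, and GLSL shaders.
  gl.clearColor(0.1, 0.1, 0.1, 1.0);
  gl.clear(gl.COLOR_BUFFER_BIT);
} else {
  console.log("WebGL is not available or not enabled in this browser.");
}
```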
  
Q2: Does integrating 3D into the browser risk slowing Firefox down, at startup or in rendering? Is there any risk to the browser's stability?
A2: From a JavaScript performance perspective, we definitely think that Firefox can handle 3D applications without a risk to stability. The implementation works on drivers that support OpenGL ES and benefits from hardware acceleration. Web sites can use AJAX methods to fetch resources, and should be judicious about caching, local storage, and other factors that can affect performance. Some good demos of WebGL can be found here:
http://www.ambiera.com/copperlicht/demos.html
You can download a nightly build from http://nightly.mozilla.org/ and follow the instructions on how to enable WebGL here:
http://hacks.mozilla.org/2009/12/webgl-draft-released-today/
  
Q3: In terms of performance, how does Firefox's WebGL implementation compare with its competitors?
A3: We expect Firefox to perform competitively with respect to WebGL, just as it does on JavaScript benchmarks and other performance tests.
  
Q4: Today, JavaScript execution speed is a concern for browser developers. Since WebGL is programmed in JavaScript, is the language fast enough to drive a physics engine, parse large 3D files, and display particle systems?
A4: JavaScript has already shown itself to be an excellent environment for 3D programs, as can be seen from the various WebGL demos in circulation. Here is another really compelling demo, showing some of the things you mention above:
http://blog.nihilogic.dk/2010/03/worlds-of-webgl.html
We take JavaScript performance pretty seriously at Mozilla, and continue to make improvements.
  
Q5: Do you think the browser will thus become a platform for online gaming?
A5: Yes.
  
Q6: Firefox also exists in versions for embedded devices. What about WebGL on Firefox Mobile?
A6: We've got WebGL working on the Nokia N900, which does support the underlying OpenGL ES 2.0 API. One reason we chose OpenGL ES 2.0 as the basis for our API is that we expect future devices to support it, thus paving the way for Firefox Mobile support for WebGL.
  
Q7: Aren't you afraid that 3D in the browser will see little use and that the current enthusiasm will fade, much like what happened with VRML in the late '90s?
A7: We definitely don't feel that WebGL will go the way of VRML. Firstly, WebGL is a low-level procedural technology, not a declarative markup technology, and lends itself to the most general use case for 3D programming. Secondly, even though so far only beta versions of Chromium, Safari, and Firefox support WebGL, the number of demo sites showcasing the technology is indicative of the enthusiasm for a low-level 3D API in the market. The market is ready and enthusiastic for WebGL.
  
Q8: Today there is great excitement around mobile applications on iTunes, the Android Market, and so on. Don't you think the browser should remain the standard runtime for mobile applications?
A8: We certainly think that with the proliferation of Device APIs (http://blog.mozilla.com/standards/2009/12/30/web-standards-in-the-device-era/), the web is the mobile platform, and mobile browsers are a great platform to target for compelling mobile applications.
  
Q9: With HTML5, multimedia tags (audio, video, and 3D) are arriving gradually: when will WebGL actually be enabled by default in Firefox?
A9: We anticipate the release of WebGL 1.0 later this year, which will give browsers in beta a stable specification to build against.
  
Q10: Do you think HTML5 will reduce interest in plugins such as Flash, Silverlight, Windows Media Player, and QuickTime?
A10: We absolutely think that HTML5 obviates many plugins, but until it is widely implemented in all browsers, there is still a use for plugins on the web. Plugins have served a useful role on the web, eventually paving the way for features such as HTML5 video, audio, and WebGL.
  
Q11: While all the browser makers have their eyes fixed on 3D, Microsoft is talking about 2D acceleration. What do you think?
A11: Hardware acceleration for 2D is important for SVG (Scalable Vector Graphics), Canvas, and general page rendering time. Mozilla has invested time in this as well: http://www.basschouten.com/blog1.php/2009/11/22/direct2d-hardware-rendering-a-browser. We welcome Microsoft's participation in the WebGL standardization effort.
Introducing IntelliCAD
Submitted by Administrator on Thursday, 07/04/2010

 

Introducing IntelliCAD
IPLUS CONCEPT

The Intelliplus software is aimed at architects (as well as building and renovation professionals) looking for an alternative to AutoCAD. The software saves its data in the DWG format, well known to Autodesk users.

Despite a somewhat dated interface, the software contains the essential tools required for drawing plans and elevations. Getting started is fairly easy, since the main commands are grouped in the AutoReg, AutoBld, Rendu (render), and Walkidea menus. The publisher compares its premium version to AutoCAD, while the standard version is functionally close to AutoCAD LT. One of this solution's strengths is probably the "On Demand" licensing system. Although only a minority of Intelli customers use this license today, it should eventually account for the majority of users. Its advantage is that the software is available on every site and every platform, with access management at the group level. Even the Mac platform will soon be supported. 3D data can also be stored online.

The publisher claims good compatibility with standard DWG files. The AutoReg module captures the site's topology from elevation points entered by the user. AutoBld creates the architectural elements. 3D elevations are generated automatically, since every element in the library is 3D, and views for plan output are produced very easily. The Rendu menu applies realistic materials (textures) to objects, with several rendering quality levels. In terms of quality, this is far from the Final Gathering or global illumination renders produced by the reference engines for architectural imagery, Mental Ray and V-Ray; the publisher recommends Artlantis to improve the result. Finally, Walkidea generates animated walkthroughs (as movies) by placing a camera path.

The main appeal of this solution, compared with the heavyweights of CAD, is its price, which puts DWG-based design within reach of every company.

>IPLUS CONCEPT

 

Marc Petit: Autodesk 2011 DCC Software
Submitted by Administrator on Wednesday, 09/03/2010
Q&A

Marc Petit, 
Autodesk Media & Entertainment
March 2010

 

"We have opted for an annual release cycle for our products, which makes modernizing them a significant challenge. We follow a policy of 'renovating one apartment at a time', which lets us re-architect our products continuously without inconveniencing our customers."

< Maya 2011

  
Q1: Autodesk is using GDC to launch all of its software simultaneously. Should we read this event as a strong signal to the video game industry? And what will happen at SIGGRAPH, then?
A1: In reality, we are investing heavily in the interoperability of our solutions and in our software suites, so we chose to align the release dates of all our products. Quite a challenge for our development teams! We chose spring because our customers tell us that autumn is too late to integrate new products into their production pipeline. The proximity of GDC and NAB is a marketing windfall. But we won't miss SIGGRAPH either, promise!

  
Q2: Maya has a completely redesigned user interface, and 3ds Max gains a node-based material editor. Are these layout changes the visible sign of deeper transformations in the architecture of these two programs?
A2: We have opted for an annual release cycle for our products, which makes modernizing them a significant challenge. We follow a policy of 'renovating one apartment at a time', which lets us re-architect our products continuously without inconveniencing our customers. 3ds Max got a new user interface with the ribbon and a new graphics engine last year; this year it gets the material editor and a GPU render engine. After Python, Maya gets a brand-new user interface (close to those of Mudbox and MotionBuilder), a brand-new graphics engine with incredible performance, and a 3D editorial module!

  
Q3: With the 2011 releases you are announcing a unified version of mental ray. What does that mean? Does this version support GPU hardware acceleration?
A3: The unified mental ray 2011 makes it possible to render images from 3ds Max, Maya, and Softimage scenes on the same render farm, which greatly simplifies things for our customers. This version does not support GPU hardware acceleration.

  
Q4: 3ds Max 2011 includes a render engine called Quicksilver. Does it bundle the real-time technologies already introduced in the previous version (ambient occlusion, HDR...), or is it an entirely new renderer?
A4: Autodesk has developed a very powerful graphics engine that can be found in AutoCAD, Inventor, Revit, 3ds Max, and Maya. It was interesting to see how each product team chose to use the technology to serve its own customers. The 3ds Max team had already done a lot of work on viewport fidelity and performance, and chose to extend this graphics engine into an ultra-fast realistic renderer (Quicksilver); the Maya team chose to use the engine to multiply the product's performance on complex scenes with many objects; and the Inventor team wanted to deliver the best possible image quality in its 3D views. You can expect this graphics engine to be used more and more across many products!

  
Q5: Softimage 2011 pushes the ICE concepts even further. Do you think the intelligence built in ICE could be exploited directly in a video game, to obtain fluid simulations, object behaviors, and procedural animation as defined in the software? Or is it a technology limited to FX?
A5: ICE includes a very compact and portable geometry and animation evaluation engine that is currently well suited to procedural effects; with the 2011 release we extended this functionality to constraints and to animation (rigging). ICE is designed for parallelism and for machines with little memory. I think we will soon be able to offer the video game market a runtime solution exposing a subset of the features found in ICE by then.
  
Q6: Now that Autodesk has a very broad DCC line-up, when will we get the magic button that exports a model from 3ds Max, textures it and adds detail in Mudbox, adds animation with MotionBuilder, adds special effects with ICE, and computes the final render in Maya?
A6: We're working on it, we're working on it! A lot of effort is going into FBX. We now have transparent exchange of models and materials between Revit, AutoCAD, and 3ds Max. A single click takes you from Mudbox to Maya. Skeletons can easily be exchanged between HumanIK, MotionBuilder, Maya, and 3ds Max, animation layers are now identical between Maya and MotionBuilder, and so on. It is a titanic job, but it is very important for our users' comfort and productivity.
  
Q7: The modeling tools improve with every new 3ds Max release, while Maya and Softimage are functionally poorer in this area. Wouldn't you like to share new features across all three packages? It would amortize your development costs better and let you offer more new features.
A7: There are constant improvements to the modeling and texturing tools in Maya and Softimage as well, admittedly not as significant as in 3ds Max. This reflects what users ask for. We have fairly sophisticated techniques for analyzing how our users actually use the products, and we invest a lot in collecting their requests and wishes. It is true that porting features from one product to another costs little, but if it does not match user demand, it is not money well spent. If you want to vote with your software, enable the CEIP (Customer Experience Improvement Program); it lets us collect, anonymously, a great deal of data on how our products are used!
  
Q8: As head of the Media & Entertainment division, do you still find time to use your products to do a bit of 3D? If so, do you find the user experience more pleasant than with the 3D packages of the '90s (Softimage 3D, TDI, PowerAnimator...)?
A8: I have Mudbox, Maya, Toxik, and Smoke on my MacBook, and I follow everything related to virtual cinematography and previs very closely. I had never really modeled; with Mudbox it has become child's play to deform, decorate, and personalize existing models, which I then use in Maya's new editorial module, and I finish the sequences in Smoke. Making 3D sequences and editing them has become very easy. Still, I like going back to Softimage from time to time; it remains the product I know most deeply, and for good reason ;-)
  
Q9: Autodesk offers SketchBook for the iPhone. Beyond the technical feat, do you think embedded systems can play a role in 3D content creation workflows?
A9: With SketchBook Mobile we exposed more than a million and a half new customers to Autodesk technology and the Autodesk brand in two months. That is simply fabulous! We have many applications in mind that use mobile platforms, particularly in gaming, architecture, and mechanical design. 3D content creation is getting much simpler; sculpting, motion capture and retargeting, and interactive rendering technologies are maturing very quickly and opening new avenues, especially on these new architectures that are both very powerful and very easy to use. Just as highly professional 35 mm cameras coexist with HD camcorders, 3D content creation is bound to diversify, including onto these new platforms, though probably not for today's professional users.
  
Q10: You are surely a keen observer of new technologies. Among them, which have the most potential in your eyes: stereoscopic 3D TV, WebGL, OnLive?
A10: My crystal ball is no better than yours or your readers'!
Unless the stereoscopic broadcast of the football World Cup in June turns out to be a disaster, and given that Avatar and the animated features have validated stereo for film, the only remaining question is how quickly 3D television sets will be adopted in households, which is probably a function of how quickly content (on disc or broadcast) becomes available. In gaming, the consoles are already there, the tools too; only the screens are missing!
Interactive 3D is going to take over the web. The techniques and the quality of interaction found in video games are increasingly accepted by users and will become the norm for entertainment, but also for social networks and e-commerce. For that to happen, the online experience needs to be predictable and pleasant; the necessary graphics power and bandwidth (even on a netbook) are probably already there, so it should not be long, and one could even argue it has already started with the iPhone. OnLive (with whom we work in close partnership) is also a good example: the service uses powerful servers and 5 Mbit/s of network bandwidth to deliver an exemplary 3D user experience, worthy of a living-room console.
I expect an accelerated commoditization of virtualization solutions, which will completely transform the way software is consumed, and I see that as an extraordinary opportunity for Autodesk!
PhotoSculpt
Submitted by Administrator on Wednesday, 09/03/2010
Q&A: PhotoSculpt
Hippolyte Mounier,
March 2010
 

 

"PhotoSculpt appeals to digital sculptors, for example for virtual objects or film sets, because it gives them an ultra-high-definition base to work from. It also appeals to real-time artists (games, web visualization) and texture artists, thanks to the export of a low-poly model combined with texture + normal map + displacement."

  
Q1: Can you tell us about your activities in 3D and how you came to develop PhotoSculpt?
A1: Of course, and thank you, Benoît, for this interview!
I am an engineer by training. My professional experience is in CAD, where I worked for 10 years. I am also passionate about 2D and 3D graphics and about photography.
Two years ago I got the urge to scan objects around me using nothing but my camera.
I threw myself into programming and, after many experiments, improvements, and feedback from the 3D community, I ended up designing a fast, pleasant, and complete product aimed at 3D artists. The result is PhotoSculpt Textures.
  
Q2: There are already programs that generate normal maps or displacement maps from a photo (CrazyBump, ShaderMap); what is original about PhotoSculpt?
A2: Unlike packages that derive relief from the analysis of a single photo, PhotoSculpt is based on two photos and recovers the relief through photogrammetry. This method is radically different and much more complex to implement. The extra photo lets the software "see" the relief by precisely triangulating the position of each pixel. Logically enough, the quality of the resulting relief is far superior to the classic single-photo methods.
The other original aspect of PhotoSculpt is that once the 3D reconstruction is done, the 3D object held in memory is very high resolution, typically 5 to 15 million faces. You can then export the 3D model at whatever subdivision level (or size) you want, or export textures (the "maps": depth, normal, specular, ambient occlusion).
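To illustrate what "triangulating the position of each pixel" means, here is the textbook rectified-stereo relation between disparity and depth. It is a generic sketch under simplifying assumptions (a calibrated, rectified camera pair), not PhotoSculpt's actual, unpublished algorithm; the function and parameter names are ours:

```typescript
// Textbook relation: depth = focalLength * baseline / disparity (rectified stereo pair).
function depthFromDisparity(
  focalLengthPx: number, // focal length expressed in pixels
  baselineM: number,     // distance between the two camera positions, in meters
  disparityPx: number    // horizontal shift of the same feature between the two photos
): number {
  // A larger shift between the photos means the point is closer to the camera.
  return (focalLengthPx * baselineM) / disparityPx;
}

// Example: a feature shifting by 40 px, with f = 2000 px and a 0.15 m baseline.
console.log(depthFromDisparity(2000, 0.15, 40)); // ≈ 7.5 meters
```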
  
Q3: The realism of the relief is particularly good; does it depend on the quality of the initial shots?
A3: The quality depends on two parameters: the shooting and the subject itself.
Shooting works as follows: take two photos, one after the other. The first with the subject facing the camera, the second shifted to the right by an angle of 10-20 degrees; that's all. After a check on screen, you can move on to the next subject.
PhotoSculpt's favorite subjects are textured 2.5D objects such as stone walls or wood.
Subjects that do not work are cars (because of the reflections), tree branches (too complex), modern buildings (they often have no texture), and glass. For more detail there are tutorials (in English for now) here:
http://www.photosculpt.net/tutorial/
  
Q4: Can the results produced automatically by the software be adjusted, or even corrected?
A4: Not in the current version of PhotoSculpt. When corrections are needed, they have to be done the "traditional" way, either with a painting package such as Photoshop or with a sculpting package such as ZBrush.
  
Q5: Do you aim PhotoSculpt more at real-time work or at pre-rendered work?
A5: I really have both categories of users:
- Pre-rendered: it appeals to digital sculptors, for example for virtual objects or film sets, because it gives them an ultra-high-definition base to work from.
- Real-time: it appeals to real-time artists (games, web visualization) and texture artists, thanks to the export of a low-poly model combined with texture + normal map + displacement.
  
Q6: It must be harder to tile the textures (make the texture repeat without showing the original pattern, i.e. make it seamless)?
A6: It actually turns out to be easier, because PhotoSculpt holds the subject's true topography in memory. It may sound trivial, but it is very effective, and the results of the "tileable mode" are gorgeous! Do have a look at the videos online, for example tutorial 4 here: http://www.photosculpt.net/tutorial/
  
Q7: In the end, if I have a tree to create in 3D, how can I use PhotoSculpt to build the whole tree?
A7: If you want an exact copy of the tree, it is extremely difficult. I don't believe there is an effective solution yet for scanning an entire tree. One face of a tree would be fine, but a full 360° tree with its branches can be extremely complex in three dimensions.
You can, however, do something simpler: use PhotoSculpt to extract very large seamless, tileable bark textures along with the matching displacement maps. You apply the texture to a low-poly tree, subdivide the tree as much as you like, then apply the displacement maps to get a great deal of detail in very little time.
  
Q8: DirectX 11 includes dynamic tessellation of geometry. Does that mean PhotoSculpt could become the tool of choice for artists creating assets for DX11 games?
A8: Tessellation is promising. Thanks to this technology we are going to reach a new level of detail in the textures and objects of games and real-time work. The demand for photorealistic textures will logically explode and put 3D game artists under even more pressure. PhotoSculpt arrives at just the right time for them: it is a perfectly suited answer for producing ultra-high-resolution 3D content simply and quickly.
  
Q9: Looking at PhotoSculpt, one cannot help dreaming of a version that could reconstruct an entire object from photos taken all around it. Is that conceivable?
A9: I would love that too! Technically, "360°" reconstruction already exists, but the results are variable. I have interviewed professionals who do this for a living (3D, archaeology/paleontology/heritage, architecture, accident scenes). They tell me it takes an enormous amount of work and equipment (lasers, calibration, cameras, lighting, fringe projectors, targets), and it is not unusual to spend a week on a single model, cleanly assembling and stitching the various textures. Personally, that does not exactly make me "dream" yet, to use your word. But I remain positive, it interests me, and I am convinced it is only a matter of time. I really hope a simple, high-quality solution can be found for 3D artists. Maybe one day that will be the future of PhotoSculpt, who knows?
  
Q10: Can you tell us about your future projects in 3D?
A10: I launched PhotoSculpt Textures v1.0 on March 6, and users are already bubbling with excellent ideas for what comes next, for which I thank them. So, quite logically, we will work out together how PhotoSculpt should evolve in future versions. I trust them; we won't be bored!
Furry Ball
Submitted by Administrator on Saturday, 05/03/2010
Q&A: FurryBall
Vaclav Kyba
March 2010
 

"The working process with FurryBall rendering is more like a WYSIWYG: every change you make shows up in the viewport right away."

< FurryBall vs Mental Ray (image: Carlos Ortega)

  
Q1: GPU rendering is more and more popular for DCC tools. What is the role of FurryBall: previz or final rendering?
A1: FurryBall is a GPU renderer for Maya that aims to be a final renderer, although there are still some cases where it cannot compete with physically based renderers. FurryBall is actually a viewport renderer, but everything you see can be output to files; in fact the final render output can be done using script commands without using the viewport at all. The working process with FurryBall rendering is more like a WYSIWYG; there is no RENDER button for the viewport renderer, and every change you make shows up in the viewport right away.
It is up to the user what role it is given in the workflow. With more and more effects and techniques added, most of the reasons not to use it as a final renderer are becoming obsolete. An important thing about FurryBall is that, from the beginning, it has been aimed at being a perfect rendering tool for animation first and foremost.
  
Q2: Can FurryBall be used with compositing software to blend render layers (AO, depth...)?
A2: The first idea was that there would be no need for render layers: "it is real time, you can fine-tune everything in FurryBall". Although this is true in many cases, many users were asking for render layers, so we added full support for the commonly used layers (depth, AO, shadows, reflections, color bleeding, ambient, specular, ...). These can be rendered to files using our output settings node system or displayed in the viewport.
Our users' demands are very important to us, and we try to add the features they want as soon as possible.
  
Q3: Antialiasing is a major feature; what are FurryBall's capabilities in this area?
A3: Proper texture filtering is a must-have for any renderer; beyond that there are jitter multisampling and supersampling. Jitter multisampling is a great feature for removing edge aliasing, which is very important especially for high-frequency geometry like hair. Why is it so great? Because there is only a very small performance hit and the result is comparable to 4x4 supersampling. Jitter multisampling can also be combined with standard supersampling, so there is usually no need for high supersample values, which are always a memory and performance killer. Even then, supersampling in FurryBall does not slow rendering down linearly the way it does in most software renderers: 2x2 supersampling, for instance, will not be 4 times slower but only something like 1.5x-2.5x.
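As a rough illustration of the idea (not FurryBall's implementation), here is a sketch of jittered supersampling: a few randomly offset sub-pixel samples are averaged, softening edges at a fraction of the cost of a full regular supersample grid. The `renderSample` callback stands in for the renderer and is purely hypothetical:

```typescript
// Average several randomly jittered sub-pixel samples to soften edge aliasing.
type Color = { r: number; g: number; b: number };

function shadePixel(
  x: number,
  y: number,
  samples: number,
  renderSample: (sx: number, sy: number) => Color
): Color {
  const sum: Color = { r: 0, g: 0, b: 0 };
  for (let i = 0; i < samples; i++) {
    // Jitter: a random offset inside the pixel instead of a fixed sub-grid.
    const sx = x + Math.random();
    const sy = y + Math.random();
    const c = renderSample(sx, sy);
    sum.r += c.r;
    sum.g += c.g;
    sum.b += c.b;
  }
  return { r: sum.r / samples, g: sum.g / samples, b: sum.b / samples };
}
```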
  
Q4: FurryBall can display real-time shadows. What kind of shadows are they?
A4: Currently there are depth-map shadows with many soft-shadow filtering options, and there are also variable-penumbra shadows that are hard where they contact the occluder and softer farther from it; these are also known as area-light shadows. We are currently working on translucent shadow maps and plan to add deep shadow maps.
  
Q5: FurryBall is an add-on for Maya; do you plan to support more software, such as 3ds Max or Softimage?
A5: We do, but not in the near future; we are thinking about creating plugins for other software (especially 3ds Max) in a year or so. A standalone version is more likely.
  
Q6: Can FurryBall compete with raytracers or physically based renderers?
A6: It surely can, but as mentioned there are still cases where nothing that is not physically based can compete with physically based renderers. The question is: does your render or animation really need to be physically correct? Physically based renderers are great as a reference, but there are many issues when using them, especially in animation, and then there are the render times.
  
Q7: GPUs are getting more and more powerful; can features such as DirectX 11 dynamic tessellation be useful to FurryBall for displaying displacement maps?
A7: Definitely, it is great to see how the DirectX API is evolving. Subdivision and displacement are among the greatest features in the new DX, but there are many more: multithreaded rendering support (still important even though FurryBall is usually GPU-limited), shader linking (the possibility of much better support for the Maya Hypergraph and custom materials), and DirectCompute, something like CUDA/OpenCL but integrated into DirectX and therefore great for existing DX engines. With DirectCompute there are many possibilities, for example adding true ray tracing or some advanced effects. There are also much higher resource limits (16K for textures and output), high-quality texture compression, and more.
  
Q8: Is FurryBall able to render advanced effects (fur, caustics, volumetric lighting, post effects...) and control those effects in real time?
A8: It would be weird if a renderer called FurryBall were not able to render fur :) The hair/fur system and its rendering are among FurryBall's greatest features; it is closely connected to Maya Hair, so you can use Maya's tools for animation and dynamics. The next update will also add the possibility of using a custom set of curves.
There are full-screen glow (bloom), final image filtering, post AO and color-bleeding adjustments, DOF controls, and more coming. Every setting or attribute you change is displayed by FurryBall in real time.
  
Q9: Do you think 3D artists can save a lot of time by using FurryBall?
A9: If we weren't sure of this, we wouldn't be working so hard on FurryBall. Every artist knows the "OMG, I want this finally rendered" feeling when hitting the big RENDER button. The time saved is extremely important here; it can be used to fine-tune the result exactly as the artist wants it. What good is a physically based renderer when you don't have time to tune things because it takes forever to render anything, and sometimes it is not even possible to adjust those extra details: because it is physically correct, you cannot change it even if it looks weird.
Serious Games: World Forklift Simulator STILL
Submitted by Administrator on Monday, 07/02/2010

 

Serious Games: World Forklift Simulator STILL
Astragon, PC CD-ROM
Feb. 2010


A nice simulator produced by ASTRAGON around the trucks of forklift manufacturer STILL. The "serious game" is now accessible to everyone.

Hobbyists can have fun moving pallets and discover the difficulties of the trade, while forklift operators can train for their license.

 

STILL Forklift Simulator from Benoit Vandangeon on Vimeo.


Available from SimWare Simulation: http://www.simw.com/index.cfm?fuseaction=dsp_product_details&pid=2294

 

OpenCL, SmallptGPU
Submitted by Administrator on Sunday, 09/01/2010
Q&A: David Bucciarelli
OpenCL, SmallptGPU, SmallLuxGPU
Feb. 2010
 

"Rendering engines cannot ignore GPU computing. This doesn't mean we will stop using our CPUs. Rendering engines will instead have to take advantage of both worlds. It is wrong to look for a CPU vs. GPU 'war'; we need instead to take advantage of the new CPU+GPU scenario."

 

 

Q1: Please give a brief description of your activities in the field of 3D.
A1: I wrote my first program for drawing a 3D wireframe model on a Commodore VIC-20 when I was 11, and I have loved just about every aspect of computer graphics since then. My first job was in the field of virtual reality at one of the robotics laboratories of the Scuola Superiore S. Anna at Pisa University.
In the same period I also started work on my first open source project: a Mesa (i.e. OpenGL) driver based on Glide for 3dfx cards. It was a successful project and was used in many applications, including the Linux version of Quake. I remember that period with great pleasure, and reading "David Bucciarelli wrote and maintained the 3Dfx Glide driver. Thousands of Linux/Quake players thank David!" still makes me smile.
I currently work in the field of telecommunications, and in my free time I am one of the developers at http://www.luxrender.net. Since the release of OpenCL, I have developed a few small demos such as SmallptGPU and SmallLuxGPU.
  
Q2: You are famous for your work on GPU computing. What makes GPUs interesting for purposes other than rasterization (z-buffer rendering)?
A2: Their unmatched floating-point performance/price ratio. Modern GPUs can offer unparalleled floating-point performance at a low price thanks to the high demand for GPUs in the entertainment market.
  
Q3: SmallptCPU and SmallptGPU are two programs that help compare CPU and GPU performance. At the same quality, what is the benefit of using a GPU for ray tracing?
A3: Ray tracing has always been one of the most computation-hungry applications, and also one of the easiest to parallelize. GPUs offer high performance and demand a high level of parallelism. It looks like a perfect marriage.

  
Q4: Are GPUs always faster than multi-core CPUs?
A4: Yes and no. There are application fields where GPGPUs [http://www.gpgpu.org] can solve problems several times faster than any CPU, but there are also problems where CPUs are several times faster than GPUs.
GPUs work in a rather peculiar way: they are not particularly fast at executing a single task, but they can perform the same task on thousands of data items at the same time (i.e. a SIMD architecture). A GPU will be faster than a multi-core CPU if and only if you can find a way to express the solution to your problem in terms of single operations applied to multiple data items (i.e. the data-parallel model).
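To make the data-parallel model concrete, here is a sketch of a SAXPY-style "kernel" applied independently to every element of an array. On a GPU (OpenCL or CUDA) each index would be processed by its own work-item; the plain loop below is only the CPU stand-in for that hardware parallelism, and all names and sizes are illustrative assumptions:

```typescript
// One small kernel: the same operation applied to every data element.
function saxpyKernel(globalId: number, a: number, x: Float32Array, y: Float32Array): void {
  y[globalId] = a * x[globalId] + y[globalId];
}

const n = 1_000_000;
const x = new Float32Array(n).fill(2);
const y = new Float32Array(n).fill(1);

// On a GPU this loop disappears: each index becomes a separate work-item.
for (let id = 0; id < n; id++) {
  saxpyKernel(id, 3, x, y);
}
console.log(y[0]); // 7
```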

  
Q5: You have also developed SmallLuxGPU, which makes it possible to render 3D scenes using GPU resources. Are GPUs the future of rendering engines? What time savings can 3D artists expect?
A5: For sure, rendering engines cannot ignore GPU computing. This doesn't mean we will stop using our CPUs; rendering engines will instead have to take advantage of both worlds. It is wrong to look for a CPU vs. GPU "war"; we need instead to take advantage of the new CPU+GPU scenario.
This will greatly improve the tools available to 3D artists by increasing interactivity and responsiveness, and by shortening rendering times by an order of magnitude.

  
Q6: As a developer, what do you think of GPU programming? It is considered a specialized platform that is hard to develop for.
A6: It is extremely challenging but highly rewarding too. We are still at an early stage in the development of drivers and tools for GPU computing; I very often run into compiler crashes, driver bugs, and interoperability problems between different vendors.
However, the reward is a level of performance never seen before: it is worth the effort.

  
Q7: Most of today's GPU rendering projects are based on CUDA (V-Ray, mental ray); is OpenCL slower? What is your point of view on OpenCL vs. CUDA, and on AMD vs. NVIDIA, in the field of GPU computing?
A7: OpenCL is younger than CUDA and, like any technology at an early stage of its development, suffers from the typical problems: driver bugs, missing tools, unoptimized implementations, and so on. However, CUDA is a proprietary API, and a proprietary API is an advantage for the vendor, not for users or developers.
Given the option, I will always choose an open standard, both as a user and as a developer. NVIDIA has been a precursor in the GPGPU field; the new Fermi architecture has been presented more as a GPU computing processor than as a GPU for the 3D entertainment market.
However, Fermi is a bit late, while the AMD HD 5xxx family has been available since September. I have both AMD and NVIDIA GPUs, and it looks like AMD is putting more effort into OpenCL support too.

  
Q8: NVIDIA claims that CUDA can speed up C++ applications by up to 100x. Do you believe that is true?
A8: MandelGPU [http://davibu.interfree.it/opencl/mandelgpu/mandelGPU.html] was my very first test with OpenCL, and it ran 61 times faster on my GPU than on my CPU. However, there are other problems where a CPU can easily outperform a GPU. Developing an application that fits the requirements of a GPU architecture is hard, complex, and time consuming (i.e. expensive). GPUs are not the answer to all computational problems, but they really shine on some.
  
Q9: What are your next projects in the field of 3D?
A9: LuxrenderGPU:
http://www.luxrender.net/wiki/index.php?title=Luxrender_and_OpenCL#LuxrenderGPU
http://www.luxrender.net/wiki/index.php?title=Image:LuxrenderGPU-first-rendering.jpg
and LuxRays:
http://www.luxrender.net/wiki/index.php?title=LuxRays
I want to move on from the days of experiments and demos to applications that are useful in the field.
DirectX 11
Submitted by Administrator on Friday, 07/01/2010

 

DirectX 11
Microsoft Windows Vista / Windows 7
ATI Radeon 5000-series graphics cards
January 2010


DirectX 10 was a flop. Publishers judged the visual benefits too slim, and Windows XP, still the dominant operating system today, is not compatible with that API; those are the main reasons for the snub. That did not cool Microsoft off, and DirectX 11 has been available since the launch of the seventh edition of its operating system. Vista compatibility is a good point, but it does not change the fact that Windows XP is once again left out. Betting that restricting DirectX 11 to recent systems will push users to migrate to Windows 7 is rather bold, given what happened with Vista and DirectX 10; but the logic of companies with an absolute monopoly is sometimes hard to grasp! In short, to enjoy DirectX 11 you need to migrate to a recent Windows and a compatible graphics card.

Here again the situation is unusual: NVIDIA, which for many years has monopolized the top spots in 3D performance rankings, has no DirectX 11 card in its catalog. September 2009, October 2009, November 2009, December 2009, January 2010: the months must feel long for NVIDIA fans! The manufacturer fully intends to support DirectX 11, but its aficionados are still waiting and must turn to ATI if they want to taste the joys of DirectX 11. Fortunately for them, few video games take advantage of Microsoft's new 3D API yet.

Dynamic Tessellation (hull & domain shaders)

To render highly detailed objects on screen, there are several options:
- Use models whose geometry is itself very detailed (characters with wrinkles sculpted in ZBrush weigh in at several million polygons). Current graphics cards cannot display such objects in a game.
- Use simplified models to which a normal map or parallax map is applied, so that the lighting varies across the surface and simulates geometric detail. This is the most widely used method today, and DirectX 9-generation games use it without restraint. It has obvious limits, though: silhouettes remain hopelessly angular...
- Use displacement maps: this time the texture does not alter the lighting, it deforms the geometry. Like a heightmap creating relief, the displacement map pushes the geometry outward. The limitations of this method are that the geometry can only move along the surface normal and that the quality of the effect depends directly on the density of the geometry (see the sketch below). This is exactly where DirectX 11 makes all the difference!
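A minimal sketch of the idea (not actual engine or shader code): each vertex is pushed along its normal by an amount sampled from the heightmap, which is exactly why the recoverable detail is capped by how densely the mesh is tessellated; the names below are ours:

```typescript
// Displace a vertex along its (normalized) normal by a heightmap sample.
type Vec3 = { x: number; y: number; z: number };

function displaceVertex(
  position: Vec3,
  normal: Vec3,   // assumed normalized
  height: number, // heightmap sample in [0, 1]
  scale: number   // maximum displacement distance
): Vec3 {
  const d = height * scale;
  return {
    x: position.x + normal.x * d,
    y: position.y + normal.y * d,
    z: position.z + normal.z * d,
  };
}
```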

 

Credit: Unigine

 

DirectX 10, with its geometry shaders, had prepared the ground (instanced tessellation). One could also mention Matrox Parhelia displacement mapping, ATI TruForm, RT-Patches... DirectX 11 goes further still: the API unifies the earlier methods. Game developers are now thinking about replacing normal maps with heightmaps, and the graphical quality of games is about to move up a notch.

The future of dynamic tessellation does not necessarily run through heightmaps. The feature also makes it possible to use mesh subdivision in real time, whether in a game or in a DCC/CAD application, and it can drive the precision with which parametric surfaces are displayed on screen. With dynamic tessellation the CPU is relieved of this work, and significant performance gains are within reach.

The end of LODs:
Tessellation makes it possible to adapt an object's subdivision level to its distance from the camera, as sketched below.
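As a sketch of the principle (not a real Direct3D 11 hull shader), here is how a per-patch tessellation factor might fall off with distance from the camera; the thresholds are illustrative assumptions, with 64 being the DirectX 11 upper limit on the factor:

```typescript
// Subdivision factor decreases with distance, replacing hand-authored LOD meshes.
function tessellationFactor(
  distanceToCamera: number,
  nearDistance: number, // at or below this distance, use maxFactor
  farDistance: number,  // at or beyond this distance, use 1 (raw control mesh)
  maxFactor: number     // e.g. 64
): number {
  const t = (distanceToCamera - nearDistance) / (farDistance - nearDistance);
  const clamped = Math.min(1, Math.max(0, t));
  return Math.max(1, Math.round(maxFactor * (1 - clamped)));
}

console.log(tessellationFactor(2, 1, 50, 64));  // close to the camera: ~63
console.log(tessellationFactor(60, 1, 50, 64)); // far away: 1, no subdivision
```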

Better precision for DCC and CAD geometric models:
When you zoom in on an object, the order of the parametric surface or the subdivision level of the mesh can increase without overloading the CPU.

In the coming years:
Tessellation will let artists work on very detailed meshes (from ZBrush, for example) while the game engine dynamically computes the geometric resolution of each object according to its importance on screen. Unlike the previous approach, which adds geometry, the point here is to simplify models in real time. Given that the tools performing this operation on the CPU take several seconds on a one-million-triangle mesh, we are not about to see it done in real time, even on a GPU. This step will make creating 3D models easier, but GPUs will need considerable performance gains to make it possible.

 

Order Independent Transparency

 

 
Order Independent Transparency off | Order Independent Transparency on
Credit: AMD
 

 

AMD has published very little information about this feature (patented by NVIDIA), which, according to the manufacturer, can be credited to DirectX 11. In any case, "alpha sorting" is a recurring problem: the Z-buffer stores the colors of the closest pixels but not their transparency, so most 3D engines cannot correctly render stacks of transparent materials (the sketch below shows why the compositing order matters). Some engines sort objects to determine a drawing order, but for moving objects this method is unsatisfactory. The fact that DirectX 11 addresses the problem is very encouraging and narrows, a little further, the gap between Z-buffer rendering and ray tracing.
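A tiny numeric sketch (not AMD's technique) of why ordering matters: compositing the same two semi-transparent fragments with the standard "over" operator in different orders gives different colors, which is precisely the artifact order-independent transparency is meant to eliminate:

```typescript
// Single-channel "over" compositing: src on top of dst.
function over(src: number, srcAlpha: number, dst: number): number {
  return src * srcAlpha + dst * (1 - srcAlpha);
}

const background = 0.0;          // black background
const red = { c: 1.0, a: 0.5 };  // two semi-transparent surfaces
const green = { c: 0.6, a: 0.5 };

// Green behind red (correct back-to-front order)...
const correct = over(red.c, red.a, over(green.c, green.a, background));
// ...versus red behind green (wrong order, e.g. unsorted draw calls).
const wrong = over(green.c, green.a, over(red.c, red.a, background));

console.log(correct.toFixed(2), wrong.toFixed(2)); // 0.65 vs 0.55, not the same
```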

The other new features in DirectX 11 are Shader Model 5.0, HDR texture compression, multithreading (DirectX 11 is thus a bit faster than DirectX 10; the Unigine Heaven demo shows a gain of around 3-5%), and DirectCompute (the equivalent of OpenCL). Logically, DirectX 11 games should multiply once compatible NVIDIA cards reach the market; a few games already take advantage of it (DiRT 2). Engines for serious games and open-source engines should gradually follow. Real availability should not be expected before 2011, though. Who said DirectX 11 was for 2011?

 

Links
http://www.highperformancegraphics.org/presentations/patney-parallel.pdf 
http://developer.download.nvidia.com/presentations/2008/GDC/Inst_Tess_Compatible.pdf 
DIRT 2 [video] http://www.youtube.com/watch?v=D9p3PYOX1Vc

 

Alice Labs
Submitted by Administrator on Friday, 01/10/2009
Q&A: Elmer Bol,
Alice Labs

October 2009
 

"There are multiple triple-A games that use scanning too, for example for the roads of grand prix tracks. Point clouds are important to these kinds of projects because so much relies on compositing real-world footage with CG; when there is an accurate match, the results are much better. Alice Labs brings point clouds into Max and Maya and thereby lets you bypass crooked software solutions."

< Studio Clouds plugin (Autodesk Maya)

  
Q1: Could you please give a brief presentation of Alice Labs?
A1: Alice Labs is a young, fresh company with sophisticated expertise. Although the company itself was founded only recently, its initiators were involved in the early days of mid-range laser scanning. They come from different backgrounds and areas of expertise, with academic degrees ranging from game engine development to chemistry. Currently six people are involved in the development of the Studio Clouds products.
Frustrated by the fact that no good software solutions for laser scanning appeared, they joined forces to build the software themselves. They believe that the workflow for building photorealistic 3D content can be considerably improved with the help of laser scanning and photogrammetry.
It started with building a point cloud engine that surpassed the current state of the art. The Mirage engine forms the backbone of the Studio Clouds software solutions; it is based on a 64-bit multi-core architecture and relies on graphics card acceleration for maximum speed. Being heavily involved in the academic world makes it possible to integrate the best algorithms into the Studio Clouds suite of products.
  
Q2: DCC software such as 3ds Max mostly uses triangles to represent geometry. Can point clouds be combined with meshes?
A2: Yes, we visualize point clouds within the 3ds Max viewport as if they were part of the Max scene. However, they are not handled like meshes; the properties of the point cloud are controlled through the Studio Clouds plug-in. For actual editing of point clouds, the Studio Clouds Editing module can be used.
  
Q3: Can Studio Clouds generate a mesh from point cloud data?
A3: Not currently, although we are working on a modeling module that will include meshing. However, there are already good ways of creating meshes, such as using the Max renderer to calculate displacement maps (cloud vs. planes) or modeling on top of the point cloud with the tools provided in the plug-in.

  
Q4: Point clouds can be generated with laser scanners, equipment that used to be very expensive. Do you believe it will become more and more affordable?
A4: Yes, because they are built in larger quantities and there is more competition, so the price will come down. But there is also a huge market for other point cloud acquisition solutions, such as photogrammetry and computer vision.

  
Q5: Could you please give us a few examples where point clouds have been used successfully?
A5: Large parts of the castles you see in the Harry Potter films are based on scanned data. Scenes from The Lord of the Rings would not have been possible without laser scanning; they used massive amounts of scanning for props and digital terrain models. There are also multiple triple-A games that use scanning, for example for the roads of grand prix tracks.
Point clouds are important to these kinds of projects because so much relies on compositing real-world footage with CG; when there is an accurate match, the results are much better. Just as in Building and Construction (BIM) projects, when you design something you want it to fit reality, and point clouds give you that precision. Alice Labs brings point clouds into Max and Maya and thereby lets you bypass crooked software solutions.

  
Q6: Can point clouds be converted to voxels? The two representations seem similar, don't they?
A6: Yes, there are similarities, but they are handled differently. We prototyped different engine solutions, and what we currently have is the best for performance.
  
Q7: 3D point cloud data is often partial (occlusion problems); is it possible to "fill holes" with Studio Clouds?
A7: Yes, by modeling the missing geometry in Max or Maya, or by using the copy tool in the Editor. It is not done automatically, but new solutions will be integrated into the modeling module.
  
Q8: Is it possible to render point clouds with Studio Clouds? Does it support materials?
A8: Yes, we are proud to claim that we are the first to integrate offline rendering of massive point clouds in Max, so point clouds can cast shadows or show up in reflections on surfaces.
  
Q9: Do you plan to develop a standalone version of Studio Clouds, or versions for Softimage or Cinema 4D?
A9: We are developing a standalone version of Studio Clouds with modules such as registration, modeling, and editing. It will effectively be a standalone point cloud processing solution. We also want to be the first company to have a point cloud solution for Mac OS X.
Too few people have requested Softimage or Cinema 4D. Our engine is flexible enough to adapt to these packages, so if there is interest we will consider it.
  
Q10: Point clouds need a lot of memory; could you tell us a bit more about hardware requirements (32/64-bit systems, memory, graphics card)?
A10: In general, an average recent gaming graphics card performs very well with the software; the newer the core, the better the performance.
There are huge advantages to be gained from better hardware and new technologies such as solid-state disks! Our engine can use as much RAM as you want thanks to its 64-bit core. We stream data from disk, but the more RAM you use, the better the performance. Our streaming technology is refined enough to perform very well on a mid-range system too.
CAP DIGITAL
Submitted by Administrator on Wednesday, 01/09/2009
Q&A: Gaëlle Couraud,
CAP DIGITAL

September 2009
 

"SIGGRAPH is to this day the leading international event for 3D, the largest in terms of conferences, scientific publications, and the diversity and quality of the speakers. The great majority of technical directors from French 3D animation studios, VFX houses, and 3D software publishers attend every year. Our goal is to give these companies, laboratories, and higher-education institutions the best possible international visibility."

< CAP DIGITAL, Siggraph 2009

  
Q1In a few words, can you describe what Cap Digital does?
A1Cap Digital is the competitiveness cluster for digital content and services. It has more than 600 members (430 SMEs, 20 large groups and 170 research laboratories) and covers 9 domain communities:
- e-Education,
- Video Games,
- Knowledge Engineering,
- Culture, Press and Media,
- Image, Sound and Interactivity,
- Services and Usage,
- Robotics,
- Open Source Software, Cooperation and New Models,
- Digital Design.

Cap Digital helps its members build and develop their Research and Development projects: since its creation in 2006, Cap Digital has received a total of 738 R&D projects and has certified 226 of them. These projects represent more than €450M of funding, including more than €200M of public aid. The cluster also sets up shared platforms such as the Très Haut Débit (very-high-speed broadband) platform. More generally, we offer our members tailor-made services: a network to rely on, support in conquering new markets, access to high-quality workshops (marketing, financing, business plans, innovation, etc.), "Think Digital" (our think tank), sector-specific market intelligence, and greater international visibility, notably through the Cap Digital booth at SIGGRAPH.

  
Q2

Cap Digital occupies a large space at SIGGRAPH (comparable to nVidia's). Is SIGGRAPH still a must-attend event for 3D?

A2SIGGRAPH is to date the leading international event for 3D, the most important in terms of conferences, scientific publications, and the diversity and quality of its speakers. The vast majority of technical directors from French 3D animation studios, VFX studios and 3D software publishers attend every year. Our goal is to give these companies, laboratories and higher-education institutions the best possible international visibility. Our 54 m2 (600 square foot) booth hosts 10 exhibitors this year, an average of 5 m2 each.
  
Q3Before Cap Digital was created, did French companies and startups have a presence at SIGGRAPH?
A3

Some French companies, such as Total Immersion, used to take an individual booth, which required a significant financial effort. Thierry Frey and Paris ACM SIGGRAPH put together a first shared booth 4 years ago, and INRIA has also, in the past, set up a shared booth with several laboratories.
What makes us original is bringing together in a single space companies both small and large (Thalès), projects (HD3D, Sebastian 2), schools (Georges Méliès, Institut Telecom), and somewhat special Cap Digital members such as the Cité des Sciences.

  
Q4Cap Digital is aimed at companies in the Paris region; do you nevertheless support companies elsewhere in France that work in 3D?
A4

Yes. This year, LongCat, a member of the "Imaginove" cluster based in Chalon-sur-Saône, was present. In 2008, IRISA of Rennes, a member of the "Images et Réseaux" cluster, was present on our booth at SIGGRAPH in Los Angeles. As for projects, we have several FUI (Fond Unique Interministériel) or ANR (Agence Nationale de la Recherche) projects in which companies based outside the Paris region are involved and funded. Associate member status allows a company based outside Île-de-France to be a member of Cap Digital. There are two differences from the standard status: it does not give voting rights at the General Assembly, and the membership fee is 50% of the standard fee.

  
Q5The stimulus plan includes a €20M envelope for the digital economy dedicated to Serious Games; does Cap Digital have projects in this field?
A5

Yes, absolutely. Of the 48 Serious Gaming projects selected, 23 were submitted through Cap Digital and 15 of them were certified. The certified Cap Digital projects that were selected will receive €7 million in funding. These results show the quality of the projects filed and built at Cap Digital, making it the leading competitiveness cluster in terms of number of projects, budget and funding in these fields. It is also worth noting the richness of the exchanges between Cap Digital members, and more broadly between the companies of its ecosystem, which led to the creation of many projects.
Within the cluster, we hope that the collaborations formed around this call for projects will bear further fruit.
We also thank ReadWriteWeb, Silicon Sentier and TechCrunch, who were partners for the information and support meetings. The networking opportunities initiated by these events, and during Futur en Seine, led to the creation of many of the projects selected today.
The list of Serious Gaming projects certified by Cap Digital and selected is available at http://www.capdigital.com/wp-content/uploads/Liste_des_projets_SeriousGame_retenus.pdf.

  
Q6In a video talk, the head of Autodesk Media & Entertainment spoke about the need to create more productive tools given the current economic context. Do you see this concern among Cap Digital companies?
A6Yes, and applications such as project management, asset management and tools dedicated to asset re-use are all present in major projects carried by the cluster, such as "HD3D-IIO", whose first public presentations will take place within 4 or 5 months. Marc Petit shares several visions and ideas with us. Our 3D animation, VFX and video game studios all use Autodesk software, and several of them build the missing links that are not commercially available in order to stay at the cutting edge of competitiveness!
  
Q7At SIGGRAPH 2009, what was the impact of your presence? Were visitors receptive to the "French touch" in 3D?
A7The exhibitors on the shared Cap Digital booth all told us they were satisfied at the end of the show. Putting a young company such as Mercenaries Engineering, which develops its own rendering engine, right across from the Pixar booth and its RenderMan engine is quite exciting! Especially when an American TV channel comes to film them. Their test images, projected on the booth's 3-metre screen, attracted many visitors. For 3 days they could not leave their kiosk; they were receiving visitors non-stop.
  
Q8The video game industry has benefited from public support; do you think the existing schemes are sufficient for the other 3D fields?
A8For the past year, the video game sector has benefited from the Video Game tax credit, worth 20% of eligible expenses incurred in France, comparable to the 20% cinema tax credit that film and broadcasting enjoy and which is being extended internationally. For now, video games do not have support schemes such as the "Compte de soutien aux Industries de programme" or advances on receipts.
OpenCTM
Soumit par Administrateur le mercredi, 01/09/2009
 Q&A

OpenCTM, 
Marcus Geelnard
September 2009

 

"As an example of the compression OpenCTM can achieve, a 10 million polygon mesh compresses to under 6% of its corresponding STL file size.
[...] The real advantage of OpenCTM, as I see it, is that you can easily use it as part of a much more complex data structure, such as a custom 3D model or scene description format for a game engine or visualization tool."

<Stanford Bunny

  
Q1Why is OpenCTM more than just another 3D format?
A1

Unlike many other 3D formats, OpenCTM combines excellent compression and a very flexible data structure, all in an open file format and an easy to use open source SDK.
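As a quick illustration of the SDK side of this, here is a minimal C++ sketch of reading a mesh through the OpenCTM C API; the function and constant names follow openctm.h as I recall it, and the file name is only a placeholder, so check the headers of the SDK version you actually use.

```cpp
// Sketch of reading a mesh with the OpenCTM C API (verify names against openctm.h).
#include <cstdio>
#include <openctm.h>

int main() {
    CTMcontext ctx = ctmNewContext(CTM_IMPORT);
    ctmLoad(ctx, "bunny.ctm");                       // placeholder file name
    if (ctmGetError(ctx) == CTM_NONE) {
        const CTMuint vertCount = ctmGetInteger(ctx, CTM_VERTEX_COUNT);
        const CTMuint triCount  = ctmGetInteger(ctx, CTM_TRIANGLE_COUNT);
        const CTMfloat* vertices = ctmGetFloatArray(ctx, CTM_VERTICES);
        const CTMuint*  indices  = ctmGetIntegerArray(ctx, CTM_INDICES);
        std::printf("%u vertices, %u triangles (first vertex: %f %f %f)\n",
                    vertCount, triCount, vertices[0], vertices[1], vertices[2]);
        (void)indices;  // hand the arrays off to your renderer or data structure here
    }
    ctmFreeContext(ctx);
    return 0;
}
```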

  
Q2Compared with other mesh formats, OpenCTM offers a very good compression rate. Is it a lossless format? Are there any simplifications of the coordinate values?
A2

Yes, it is a lossless format. No vertices are dropped in the compression process. If you want lossy compression, it is of course possible to combine OpenCTM with any polygon reducer to achieve even higher compression ratios.
For an example of the level of compression that OpenCTM can achieve, a 10 million polygon mesh (the Stanford Thai Statue) compresses to under 6% of its corresponding STL file size.

  
Q3Could you please tell us the main features of OpenCTM (vertex colors, texture UVs, animation, ...)?
A3

When it comes to vertex data OpenCTM tries to be as flexible as possible in order not to limit its use to any specific application.
In addition to the most basic mesh information (coordinates, normals and connectivity), the format can hold any number of UV maps (e.g. for 2D texture coordinates) and any number of custom per vertex attributes (this can really be anything, for instance colors, pre-calculated ambient occlusion, weights, and shader-specific attributes).
There is currently no explicit support for animation, but since it is possible to define per vertex weights, you can combine OpenCTM with an externally described skeleton animation system (i.e. one weight attribute per skeleton bone). This way you are free to use OpenCTM in almost any kind of skeleton based animation system.

  
Q4What is the main target of OpenCTM?
A4

Right now, I think the primary audience is 3D software developers, who can integrate the technology into their products in various ways.
For instance, OpenCTM can be used in situations where you use simple 3D mesh formats such as STL or Stanford PLY today. The immediate advantage would of course be a drastically smaller file size, which can be convenient in several situations (e.g. for scientists that want to store and share large 3D data sets, or in production environments where you need to transfer or store large amounts of 3D meshes).
The possibilities do not stop at replacing simple file formats though. The real advantage of OpenCTM, as I see it, is that you can easily use it as part of a much more complex data structure, such as a custom 3D model or scene description format for a game engine or visualization tool. An analogy: just as the OpenDocument Format uses XML for text and layout and PNG files for images, a 3D model format could consist of an XML file for object properties and materials, and OpenCTM files for the object geometries.
In the long run, I hope that OpenCTM will be available to many different user groups, directly or indirectly, but of course that depends on how well it is received by the software development community.

  
Q5What are the MG1 and MG2 methods of OpenCTM? Can they be compared with polygon reduction tools such as Mental Mesh or OpenMesh?
A5As OpenCTM is a lossless format, it does not perform any geometry reduction. All triangles and vertices are preserved, regardless of compression method. However, OpenCTM provides different compression methods in order to trade off speed, memory, compression ratio and precision. Both the MG1 and MG2 methods use lossless triangle reordering techniques and LZMA compression to reduce the size of the connectivity information. The main difference between the MG1 and MG2 methods is that the MG1 method stores vertex data as floating point values, while the MG2 method uses fixed point. The latter allows for dramatically improved compression, mainly because lossless prediction techniques can be used (not entirely different from the PNG compression scheme, for instance). The whole idea is that the LZMA coder likes smaller values, so the MG2 method tries to minimize the value range of the vertex coordinates (and other vertex attributes).

From a user point of view, this means that the MG1 method can be used if the floating point data must be preserved at all cost, while the MG2 method can be used when you know what numerical precision you need. In the future, more compression methods may be added to the OpenCTM file format, but none are planned right now.
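To make the trade-off concrete, here is a hedged C++ sketch of an export path that selects the MG2 method, sets an explicit vertex precision, and attaches a custom per-vertex attribute map (for example skinning weights, as discussed earlier). The function names follow openctm.h as I recall it, and the precision value, attribute layout and file name are assumptions made for the example.

```cpp
// Sketch of exporting a mesh with MG2 compression and a custom per-vertex
// attribute map. Verify function names and the attribute map element count
// (4 floats per vertex in the SDK versions I have seen) against your openctm.h.
#include <vector>
#include <openctm.h>

void saveMesh(const std::vector<CTMfloat>& vertices,     // 3 floats per vertex
              const std::vector<CTMuint>&  indices,      // 3 indices per triangle
              const std::vector<CTMfloat>& boneWeights)  // 4 floats per vertex (assumption)
{
    CTMcontext ctx = ctmNewContext(CTM_EXPORT);
    ctmDefineMesh(ctx, vertices.data(), static_cast<CTMuint>(vertices.size() / 3),
                  indices.data(), static_cast<CTMuint>(indices.size() / 3),
                  nullptr);                               // no normals in this sketch
    ctmAddAttribMap(ctx, boneWeights.data(), "bone_weights");

    // MG2 stores vertex data as fixed point; the precision sets the quantization step.
    ctmCompressionMethod(ctx, CTM_METHOD_MG2);
    ctmVertexPrecision(ctx, 0.001f);                      // ~1/1000 of a scene unit (assumption)
    ctmSave(ctx, "character.ctm");                        // placeholder output file
    ctmFreeContext(ctx);
}
```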

  
Q6OpenCTM offers a complete set of tools: a viewer, a conversion tool and developer files. OpenCTM is a good alternative to other formats such as Collada. Do you plan to add conversion tools for Collada?
A6Yes, several users have already requested a COLLADA converter, and it is being looked at right now. I will continue to add converters and import/export plugins for different applications when I have the time. People are of course encouraged to help out with the development. For now, there are a couple of simple Blender scripts available for importing and exporting OpenCTM files, so you can use Blender as a poor man's converter too.
  
Q7As OpenCTM is highly compressed, it is well suited for internet use. Can it be handled by an online 3D engine (in Java or Flash)?
A7Internet applications were certainly on my mind when I designed OpenCTM. We always seem to be short on network bandwidth, which means that internet 3D applications can possibly gain a lot of performance by using 3D compression. Right now, there is no direct support for Java or Flash. For Java, it is possible to compile a Java enabled shared library of OpenCTM for all platforms that your application needs to support (this is how LWJGL works, for instance), but I think that the ideal solution would be to port OpenCTM to Java completely. I certainly hope to see this happen in the future. Again, time is the only limiting factor.
  
Q8Could you please tell us a bit more about your goal with OpenCTM and what are your next milestones?
A8First of all, I would like to think of OpenCTM as a compression technology that can be used in many different applications, not only because of its technology, but also because of its openness. For me it has been a way to make my invention available to the public, and I hope that people will find it useful.
The first near term milestone is the release of OpenCTM 1.0, which will basically be the same as the current release, with minor corrections and additions based on the feedback from users.
The next step will be to improve the support for OpenCTM in various applications, through converters, programming language bindings and porting it to new platforms (e.g. Java).
What happens next is really up to the 3D software developer community. I would not be surprised if OpenCTM will be used in ways that I have never imagined.
  
Mental images RealityServer
Soumit par Administrateur le dimanche, 01/08/2009
 Q&A

Reality Server, 
Ludwig von Reiche, Chief Operating Officer for mental images
August - 2009

 

"RealityServer allows users to access and interact with highly complex imagery that is not reliant on the user's limited desktop and laptop capabilities [...] data remains secure with RealityServer as manipulations and changes to the data can only be saved back to the server."

< mental images RealityServer

  
Q1RealityServer is a server-side technology. What are the benefits of rendering images on the server vs client?
A1

As a server-based technology, RealityServer allows users to access and interact with highly complex 3D data in a manner that is not reliant on the user's limited desktop and laptop capabilities. A server can store immense amounts of data, in terms of memory, and can therefore house 3D data that most client computers cannot. Equally important, such data remains secure with RealityServer, as manipulations and changes to the data can only be saved back to the server and not to a user's hard drive, keeping all highly confidential blueprints, product designs, and maps safe. Imagery is also delivered immediately to the client, without the need to stream or download large amounts of 3D data.

  
Q2For the client : what are the requirements to display 3D graphics rendered by the RealityServer?
A2

Clients only need an Internet connection and a web browser to access the imagery rendered by RealityServer. The data exchange between the client and the server, through RealityServer, is done via a simple HTTP request, which produces an image that is sent back to the client.
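For readers wondering what this looks like from the client side, the following C++/libcurl sketch shows the generic pattern of fetching a server-rendered frame over HTTP. The URL, query parameters and output file are purely hypothetical; RealityServer's actual command interface is documented in its own SDK and is not reproduced here.

```cpp
// Generic client-side sketch of server-based rendering: request a frame over
// HTTP and receive the rendered image bytes. Endpoint and parameters are made up.
#include <curl/curl.h>
#include <fstream>
#include <string>

static size_t appendToString(char* data, size_t size, size_t nmemb, void* userp) {
    static_cast<std::string*>(userp)->append(data, size * nmemb);
    return size * nmemb;
}

int main() {
    std::string imageBytes;
    CURL* curl = curl_easy_init();
    if (!curl) return 1;
    curl_easy_setopt(curl, CURLOPT_URL,
        "http://render-server.example.com/render?scene=kitchen&width=800&height=600");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, appendToString);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &imageBytes);
    const CURLcode rc = curl_easy_perform(curl);
    curl_easy_cleanup(curl);

    if (rc == CURLE_OK) {  // a browser would simply display the image; here we save it
        std::ofstream("frame.jpg", std::ios::binary).write(imageBytes.data(), imageBytes.size());
    }
    return rc == CURLE_OK ? 0 : 1;
}
```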

  
Q3About the server: does RealityServer need dedicated hardware? Can it handle the power of Tesla units? How many simultaneous users can be connected to RealityServer?
A3

RealityServer does not utilize dedicated hardware on the client side, as it relies on the server side for all significant computing power. This allows the data to be accessed from any computer, whether it is a desktop or a mobile device. RealityServer itself does not place limits on the size of the 3D data, and there is no inherent limit to the number of simultaneous users accessing the data. Obviously, the server should be robust and designed to house large sets of data, which most companies make sure to do. As images get larger, with newly added data for extra pixels for example, RealityServer can scale very easily. RealityServer is also able to take advantage of GPUs to speed up global illumination, which is normally very expensive otherwise, and its iray rendering mode (a soon-to-be-incorporated, interactive, ray-traced global illumination rendering technology that generates photoreal imagery by simulating the physical behavior of light) requires a CUDA 1.1 compliant NVIDIA GPU. Quadro, Quadroplex and Tesla are the preferred hardware platforms.

  
Q4RealityServer has been available for half a decade. Can RealityServer be used on "mainstream" internet sites?
A4

RealityServer is already being used on mainstream Internet sites by companies who want to offer their customers a unique online experience. Scenecaster, a leading provider of 3D social media applications, uses RealityServer to let their users customize fun, personalized environments on their social networking profiles. And mydeco, the new London-based interior design website, uses RealityServer to offer interior decorators its online tool called "Complete Room Planner", where users can design living spaces from the wallpaper to the furniture using a library of décor that can then be purchased for real-world use.



< User-generated living room, designed on mydeco.com in real time using RealityServer

 

  
Q5Is this technology affordable for small studios?
A5With the security RealityServer brings to sharing confidential documents, RealityServer is a smart investment for companies of all sizes. New studios in particular may be working on one project only. If that project were to be compromised by even a small leak (an image file sent to the wrong email address, for example), then a competitor could get hold of it and the entire company could go under. Small businesses are sensitive to risk, and RealityServer is often less of an expense to keep projects safe than more complicated, traditional solutions.
  
Q6Is it easy to integrate RealityServer contents into Flash or Silverlight applications?
A6RealityServer 2.3 includes a standards based Web Services Framework that makes the technology quite easy to integrate with Adobe Flash, Microsoft Silverlight or any development technology which supports standard Web Services. The framework comes with a comprehensive documentation system and reusable client libraries for Adobe Flex and Microsoft Silverlight. Popular standards such as SOAP, JSON-RPC and REST are supported by the framework.
  
Q7Could you please describe the RealityServer engine: is it mental ray or a raster renderer?
A7RealityServer ships with a variety of rendering technologies enabled for various user requirements. These include a powerful ray-tracing engine with support for programmable shading (MetaSL) and advanced lighting, a more conventional GPU-based rasterizer with programmable shading (again using MetaSL), a GPU-based non-photorealistic Sketch renderer for stylized line rendering and, more recently, a rendering option based on our new iray technology for "push-button" photo-real rendering that is capable of fully exploiting GPU computing power for fast results and interactive refinement. Additionally, RealityServer 2.3 introduced NVIDIA CUDA based acceleration of Ambient Occlusion and Image Based Lighting for the GPU rasterizer.
  
Q8RealityServer 2.3 improves the speed of image rendering, but is server-based rendering fast enough for moving and rotating 3D objects?
A8A user can efficiently rotate and manipulate images within RealityServer, assuming the server's performance and bandwidth capabilities are up to speed. Obviously performance will depend on a number of factors, including the server hardware used, the complexity of the data, the number of users and the quality of the network connection. When testing with multiple users we have found sub-linear degradation of performance, meaning that doubling the number of users does not halve each user's performance. In cases where latency is a critical factor, several of our customers have also employed hybrid solutions, for example utilizing a simplified, low quality representation with a client-side technology such as Acrobat 3D or a Flash 3D approach, using this for latency-critical interactions and displaying the higher quality RealityServer results once that type of interaction is completed. Additionally, progressive rendering can be employed to obtain initial results very quickly and then refine quality over time, reusing information from previous frames.
  
Q9Are web technologies such as Flash (which is more and more able to display realtime 3D objects) an alternative to RealityServer?
A9mental images does not consider Adobe Flash a competitor to RealityServer. RealityServer doesn't depend on the client's machine, and can therefore enhance the use of Flash and others like it. Client-side 3D approaches such as those employed in Flash today will inevitably hit a wall in terms of quality, complexity and security; even with higher end client-side hardware, most clients are typically not able to handle the complexity which can be tackled on the server side. Also, as the complexity of the data increases, client-side approaches see increasing start-up times, because the data must be downloaded or streamed. With server-side rendering, as utilized in RealityServer, the bandwidth requirement is independent of the complexity of the data being used.
Ultimate XAML
Soumit par Administrateur le dimanche, 01/08/2009
 Q&A

Ultimate XAML, 
J Collins Design
August - 2009

 

"Microsoft has made significant investments in making 3D XAML available as a component for designing user interfaces, but we only rarely see it used partly because of the difficulties in creating content in that format. This tool is designed to help solve that problem."

< Ultimate XAML in action

  
Q1Please give a brief description of Ultimate XAML.
A1

Ultimate XAML for Softimage is a plugin for Autodesk Softimage and Mod Tool that helps with the creation of 3D XAML content. The primary audience includes user interface developers of XBAPs (such as web games) and standalone applications. It is more than just a simple exporter; it includes a real time shader that mimics how XAML content appears in a Windows Presentation Foundation (WPF) application so that the final model can be accurately visualized during creation. The WPF shader settings correspond exactly to the settings available to the XAML material settings so there is nothing lost in translation. Other exporters we've looked at impose severe limitations on the types of materials they can generate or the composition of the scene; we've done our best to minimize these limitations and make this process as simple as possible. Scenes can have arbitrary hierarchies, objects can have local transforms, meshes can have subdivision surfaces and even instances are fully supported in the final XAML to keep file sizes down.
The plugin also includes the Ultimapper Helper which is designed to help create the image maps XAML requires, which I'll explain more about later.
Another aspect of this plugin is the ability to create C# code that helps interface programmers use the exported objects within an application and integrate the resources with the project in a type safe manner. The main benefit of this is the compiler will produce errors if an asset that doesn't exist is referenced by the code.
While this is the first public release of Ultimate XAML for Softimage, we initially created this tool in 2006 to solve our own development needs while creating XBAP games and other applications and we have been slowly improving it since then. We would very much like to start seeing Windows applications incorporate 3D elements more often because we think it can make for a better user experience. Not too long ago we decided to make it available to others because clearly, even though 3D XAML has been available to developers for several years now, it is not used very much. We hope this tool makes it a little easier for developers to make slick looking and fun to use applications.

  
Q2XAML is not often used to display 3D content. What are the benefits of XAML/WPF?
A2

XAML is the standard way to integrate 3D content into the user interface of a Windows Presentation Foundation application. Historically, applications that needed to present 3D content would use either pre-rendered bitmaps or Direct3D. Pre-rendered bitmaps certainly continue to have their place in UI design but have some obvious limitations in that dynamic 3D content isn't an option. Integration with Direct3D poses many implementation challenges because embedding a Direct3D rendering system within a user interface without it becoming the user interface is very complicated. Certainly possible, but not as convenient as using XAML, which enables the user interface programmer to simply embed a 3D control into the middle of an existing user interface. Any Windows PC that has .NET 3.0 or greater (which is included in all Windows Vista and Windows 7 installations and can be installed onto XP) can embed 3D XAML content directly into the user interface of an application. 3D XAML content coexists with all other user interface systems of the Windows Presentation Foundation, and 3D XAML appears the same way across all supporting systems with no special hardware requirements. Microsoft has made significant investments in making 3D XAML available as a component for designing user interfaces, but we only rarely see it used, partly because of the difficulties in creating content in that format. This tool is designed to help solve that problem.

  
Q3Ultimate XAML exports 3D assets from Softimage. What are the main settings to apply to a Softimage scene so that it exports well?
A3

The basic requirement is to create a collection of meshes whose render trees are configured to use the WPF real time shader included in the plugin. The render trees can be shared or unique per mesh. The shader provides precise control of the settings of the XAML materials that are emitted by the exporter. Each render tree can have any number of WPF shaders chained together, and the exporter will produce a XAML Material Group that corresponds to how these are connected. This allows XAML objects to be constructed which use any combination of diffuse, specular and emissive materials. These are distinct material types in XAML, and the WPF shaders can be configured individually to suit the artist's preferences for each material in use. If the creation of a model requires the use of Mental Ray shaders, as is often the case, the images need to be baked into image files for use with the WPF shaders. This is a task very similar to game developers reducing a high resolution model into a game-ready model. The Ultimapper Helper can make that process easier by coordinating the generation of these images across multiple instances of Ultimapper objects, one per material channel.

  
Q4Ultimate XAML introduces its own material nodes. Does it also support advanced shading (such as fragment/vertex shaders)?
A4

Ultimate XAML for Softimage introduces two material nodes. The first is the WPF Shader, which enables the artist to visualize how the final XAML object will appear directly within Softimage and allows for tweaking the XAML properties in real time prior to export. The WPF shader is intentionally limited to the features available to XAML, so it only supports solid colors and image map textures used with diffuse, specular and emissive shaders. It is impossible to configure the WPF shader in a way that is incompatible with the features XAML supports. The second node is a simple switch node that is used by the Ultimapper Helper during the creation of Ultimapper images to extract the diffuse, specular and emissive channels from a standard render tree into image files for use with the WPF shaders. The Ultimapper Helper switch shader inputs can be connected to any other node output but will usually be the non-illuminated texture channel data for the material because the final XAML object will typically be lit by the application's scene lights. You don't need to use the Ultimapper Helper switch shader to export XAML objects, but in situations where it makes sense to use the Ultimapper Helper, it can be a real time saver for creating the images used by the XAML objects.

  
Q5When using Ultimate XAML, is it possible to export 3D data to XNA?
A5No, this exporter is specifically designed to create XAML content and XNA does not support XAML at this time. We would very much like to see XNA support XAML in the future because currently the XNA platform has a distinct lack of comprehensive user interface APIs and XAML would be an obvious choice for Microsoft to add support for. If the user's goal is to produce general purpose 3D content for XNA, Softimage users already have the option of using the XNA add-on which is more suited for that task because it is not constrained by the limitations of XAML.
  
Q6The Ultimate XAML documentation does not mention animation: does that mean Ultimate XAML does not export object animation or bones?
A6That is correct. While XAML supports rigid body hierarchies, it unfortunately does not support bones or skinning. Ultimate XAML for Softimage can export object hierarchies with instancing, which applications can use to configure animations procedurally, but the plugin does not currently export animation data. Our internal project which originally drove the development of this plugin needed to maximize XAML animation performance, and this precluded us from using the standard XAML animation features because they are unfortunately much slower than an animation system tuned for a specific application. There is a lot of overhead that comes with the standard XAML animation features. Despite this, we are hoping to provide animation export features in a future version because clearly it would be a useful thing, and many applications simply don't require high performance animations in the UI. You can find out more about this and other limitations of the software on page 7 of the PDF documentation under "Features and Limitations".
  
Q7What are your future development and evolution of Ultimate XAML?
A7We have a pretty good list of features we would like to incorporate into Ultimate XAML for Softimage. Some of the major items we would like to support in the future include animation support, ICE integrated particle export, and code generation for user interfaces. Whether or not these happen depends partly on our own internal needs and partly on user feedback.
  
Q8XAML is more than just a 3D format, do you plan to support more features?
A8XAML is certainly much more than just a 3D format! In fact, it's probably safe to say most people know XAML for almost everything except 3D. Our main focus right now is to make it easier for developers to use XAML 3D within their applications because clearly, as we learned ourselves when we saw the need for this tool, it is very difficult to get 3D content into an application unless you have the right tools. We will continue to focus on those aspects of XAML content creation which can benefit from being part of a 3D authoring environment such as Softimage and Mod Tool.
OGREMAX
Soumit par Administrateur le jeudi, 01/07/2009
 Q&AOGREMAX, 
Derek Nedelman,
July 2009
 

 

"My number one goal with OgreMax has been to provide a complete path for creating and moving assets from 3DS Max, Maya, and XSI to an Ogre3D-based application."

< OGREMAX for Softimage in action

  
Q1Why have you developed OgreMax, a suite of tools that exports to the Ogre3D graphics engine?
A1I developed OgreMax after I began using Ogre3D and discovered it didn't have a 3DS Max exporter that allowed me to export a scene (meshes, cameras, lights, animations, among other things), process it (by changing the up axis, rescaling units, merging meshes, and so on), load it into an external viewer for inspection, and then load it into my own application. Before creating OgreMax, I tried to get this type of functionality by piecing together an assortment of exporters, viewers, and scene loaders of varying quality, and of course nothing worked very well, if at all.
  
Q2Importing assets is a key component of the 3D pipeline. What are your principal goals?
A2My number one goal with OgreMax has been to provide a complete path for creating and moving assets from 3DS Max, Maya, and XSI to an Ogre3D-based application.
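For context, this is roughly what the consuming side looks like in a classic Ogre3D (1.x-era) application once a mesh has been exported: a short, hedged C++ sketch using the standard Ogre scene API, with the entity name, mesh file and position made up for illustration (loading a full OgreMax .scene file goes through OgreMax's own scene loader, which is not shown here).

```cpp
// Sketch: attaching an exported .mesh to the scene graph with the classic Ogre3D API.
// The mesh and its .material/.skeleton files must be in a registered resource location.
#include <Ogre.h>

void buildScene(Ogre::SceneManager* sceneMgr) {
    Ogre::Entity* entity = sceneMgr->createEntity("Hero", "hero.mesh");   // placeholder names
    Ogre::SceneNode* node = sceneMgr->getRootSceneNode()->createChildSceneNode();
    node->attachObject(entity);
    node->setPosition(0.0f, 0.0f, -50.0f);  // arbitrary placement for the example
}
```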
  
Q3OgreMax is available for 3DS Max, but versions are also available for Maya and XSI. Why support all these applications?
A3The first version of OgreMax was released over two years ago as a simple 3DS Max-only exporter. Its feature set grew rapidly, and after nearly a year of development it became apparent that the original program design needed to be overhauled in order to accommodate more features (the real-time viewports, in particular). Before moving on to this phase, I decided to try out the existing Maya and XSI exporters for Ogre3D. None of them did what I wanted, so I decided that since I was going to rewrite the OgreMax exporter for 3DS Max I would also create Maya and XSI versions. Having used their software development kits in the past, I knew that it would be possible to implement most of the features OgreMax already had.
  
Q4With regards to porting OgreMax to the different applications, could you please give us your feeling about the 3DS Max, Maya and XSI SDKs?
A4It's easiest to summarize my feelings with a list of pros and cons for each SDK:

3DS Max SDK
+The most comprehensive SDK. It works far better than the others because it allows you access to nearly everything.
+Has a pure C++ interface which means all plugin code can be written in the same language, resulting in easier debugging and a faster run time.
+The simplest scene graph traversal.
+The simplest data storage methods.
+The simplest user interface handling. User interface elements are created within Visual Studio's resource editor, which means there's less guessing about how things will look.
+Contains lots of sample code.
+Everything is compiled into a single plugin file. This makes manually installing OgreMax very easy.
-The programming interface is complex and messy.
-Lots of code needs to be written when creating new object and material types.
-Compile times are very slow compared to the other SDKs.

Maya SDK
+Has a well-defined and clean programming interface.
+New object types require very little code.
+Easy to add custom commands.
+Compile times are the fastest of the three SDKs.
-User interfaces need to be written in separate MEL script files, which results in a slower run time.
-Creating and accessing some data types is difficult and/or strange.
-Traversing the scene graph is tricky.
-Some functionality is not accessible through the C++ API and must instead be accessed by dynamically executing script code.
-Custom viewports are not supported. There's a somewhat similar custom renderer feature that OgreMax makes use of, but it's slow and not very useful.

XSI SDK
+New object types require very little code.
+Easy to add custom commands.
+Fast compile times.

-Custom materials and shaders must be implemented in separate script files, which is error prone since all the code that accesses the material data is written in C++.
-User interfaces need to be written in separate script files, which results in a slower run time.
-Parts of the user interface in the property page cannot be destroyed and recreated dynamically.
-XSI's custom user interface styling causes problems with many of the OgreMax dialogs. The SDK provides a way to toggle the custom styling, but this results in the OgreMax dialogs looking completely different than the XSI dialogs.
-Inconsistent notifications. Some internal notifications only occur if at least one embedded OgreMax scene window is open. This requires workarounds in the code for the case that no such windows are open.
-No way to hook into material editor 'node' menus. This makes creating OgreMax materials a little more time consuming for the user than it needs to be.
-Some functionality is not accessible through the C++ API and must instead be accessed by dynamically executing script code.

  
Q5OgreMax exports most of the 3D scene details: objects, animation, bones, lights, cameras. It also has a real-time preview (through the use of custom scene viewports) that allows users to view the scene before exporting. These are very advanced features for a free tool. Which feature of OgreMax are you the most proud of?
A5I'm proud of a number of things:
-I'm proud that there are now three OgreMax exporters that provide what I originally wanted: a path for creating and moving assets from a 3D content creation application to my own application.
-I'm proud that all three exporters offer such an extensive feature set and uniform user interface. The Maya and XSI exporters are identical, and they each have roughly 95% of the features that the 3DS Max version has. This makes it very easy for artists to move from one tool to another if necessary.
-I'm proud of the fact that so many people have adopted OgreMax as their primary toolset.
  
Q6 Ogre3D .scene and .mesh format specifications change frequently (OgreMax supports Ogre3D 1.6 & 1.7). Do you think that they are the best formats for a 3D pipeline? What about intermediate formats such as FBX or Collada?
A6Any successful format needs to evolve over time, so the fact that .mesh and .scene are changing is a very good sign. It shows that the formats are being adapted to people's needs.
The FBX and Collada formats are perfectly acceptable as interchange formats for 3D content creation applications, but they will typically not be used as a final format. For example, if you were developing a game you would most likely write tools to process FBX/Collada files, removing excess data and eventually saving to a format that your game can read. There are a couple of problems with this approach, however:
1)You still need to write tools - Depending on your needs, you may be able to get away with something simple such as reading the FBX/Collada files and then writing out a few lists of data. On the other hand, you may need more functionality, such as being able to join together meshes, or rescale the scene. These types of tools may require a graphical user interface, which can take more time to create. Also, many types of operations seem simple at first but become very complex. For example, rescaling a scene seems simple until you realize that rescaling affects mesh vertices and all their animations, skeletons, cameras, lights, and all their related properties. Creating and maintaining the tools can become a full-time task.
2)Designing file formats is difficult - Again, if your needs are simple then this may not be an issue, but for all but the simplest programs creating and maintaining file formats can be even more difficult than creating the tools that use those formats.
3)Do you really want to create tools? - Why do all that work if you don't have to?
For most situations the Ogre3D formats are the best choice since they have been in use for years, and there are already tools such as OgreMax you can use to generate that type of data.
  
Q7 What is the license of OgreMax? Are you proposing commercial services around OgreMax and Ogre?
A7The OgreMax exporters are free to use for any purpose, with the exception that they may not be redistributed or resold. The OgreMax viewers and their source code are free to use for any purpose, on the condition that you give credit to OgreMax in your application.
By the middle of August, the exporter source code will be available for licensing. In addition, paid support will be offered.
Q8Ogre3D benefits from a large user community and has a lot of tools for exporting assets. But more generally, do you think that the creation of export tools is the job of the 3D application creator (Autodesk, for example) or the 3D engine maker?
A8The best thing 3D application creators can do is create full featured SDKs for their products that allow 3D engine makers or anyone else to create the type of tools they need.
  
  
OSG Composer
Soumit par Administrateur le jeudi, 01/07/2009
 Q&AOSG Composer

Ashraf Sultan, Simulation Lab Software
July 2009
 

 

"OSG Composer targets two main user groups. The first is advanced 3D users, who use OSG Composer as a tool to fill a gap in their 3D pipeline. The other group is CAD users, who are looking for an inexpensive way to view and share CAD files. This group will find OSG Composer a high quality tool to do the job, without the high price tag they are used to in the CAD arena."

< OSG Composer (Windows)

  
Q1OpenSceneGraph (OSG) is a 3D engine well known in the fields of gaming, serious gaming and simulation. Is OSG relevant when it comes to displaying highly detailed CAD data?
A1When Simulation Lab Software started on what was then called SimLab Composer, we discussed whether we should use OpenGL directly or go through a higher-level library. At the beginning, we decided to use OpenSceneGraph as a thin layer to take advantage of the functionality already implemented in OSG. The more we used OSG, the more useful we found it for creating a CAD viewer and a 3D scene composer. It provided us with great functionality including picking, a framework for creating 3D manipulators, out-of-the-box culling, great GUI library integration, and much more. All this makes OSG great for creating CAD software, and I guess we will see more and more OSG-based CAD applications in the future.
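For readers unfamiliar with OSG, the following minimal C++ viewer shows the kind of foundation the library provides out of the box; this is standard OpenSceneGraph usage, not OSG Composer's actual source code, and the model file name is a placeholder.

```cpp
// Minimal OpenSceneGraph viewer: load any model with an osgDB plugin and display it.
#include <osgDB/ReadFile>
#include <osgViewer/Viewer>

int main(int argc, char** argv) {
    // Any format with an osgDB reader plugin can be loaded this way (.osg, .ive, .obj, ...).
    osg::ref_ptr<osg::Node> model = osgDB::readNodeFile(argc > 1 ? argv[1] : "part.osg");
    if (!model) return 1;

    osgViewer::Viewer viewer;        // supplies culling, camera manipulation and picking hooks
    viewer.setSceneData(model.get());
    return viewer.run();
}
```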
  
Q2What can OSG Composer do? Is it possible to create 3D configurators, walkthrough with OSG Composer?
A2In the first user test of OSG Composer, a few expert 3D users used the software with no introduction. The first thing they tried to do was to use it to create different configurations of the models they had imported. The assembly tree, the copy and create-instance functions, and the 3D visual manipulators make creating configurations in OSG Composer really easy and intuitive.

Because of the currently planned usage of OSG Composer, it does not support creating walkthroughs. We promise an enjoyable walkthrough creation and visualization experience in our next products.

  
Q3OSG can handle a large range of 3D formats. Could you please describe the support for Collada and PDF 3D? (Is it possible to import OSG Composer Collada files into Photoshop CS4? Is it possible to publish PDF 3D without Acrobat?)
A3OSG Composer targets two main user groups. The first is advanced 3D users, who use OSG Composer as a tool to fill a gap in their 3D pipeline. This gap may be adding support for unsupported file formats, composing a 3D scene, doing advanced material breakup for rendering, fixing a problematic model, or even preparing data from different sources to be used in their 3D applications. The other group is CAD users, who are looking for an inexpensive way to view and share CAD files. This group will find OSG Composer a high quality tool to do the job, without the high price tag they are used to in the CAD arena.

We wanted to give those groups the right formats to share their models. The easiest way to share a scene with others is to create a PDF 3D file. OSG Composer performs all the tasks needed to create a PDF 3D file without the need to have Acrobat installed. The generated file is ready to be viewed using the free Acrobat Reader. This would not have been possible without Adobe's remarkable publication of the PDF file format as a standard.

For using OSG Composer models in other applications, the user has the option to generate a Collada file, which maintains the full assembly structure and makes the geometry ready to be used with other applications supporting Collada.

If the user wants to render the 3D scene, OSG Composer supports exporting the scene as an OBJ file, which can be used by Photoshop CS4, for example, in a 3D layer of an image.

  
Q4OSG Composer can apply materials to CAD parts: is this an easy process, and how realistic can materials be (shaders, reflections, shadows)?
A4Material assignment in OSG Composer is as simple as dragging a material from the material tree and dropping it on the geometry you want to paint. But that is not all: for advanced users, OSG Composer provides an advanced material break-up mechanism. For example, when different parts use the same material, you can easily make each part or surface use its own material. Conversely, when different parts use different materials, you can make them share the same material for fast future material assignment.

In this release OSG Composer supports simple materials with ambient, diffuse and specular colors and textures. We wanted to make sure OSG Composer will run on virtually every computer, so that everybody can enjoy the magnificent world of 3D. Future releases will add support for shaders, reflections and shadows.

  
Q5OSG Composer preserves the assembly tree; is it possible to animate assemblies too?
A5The assembly tree can be useful in many scenarios; it makes moving parts easy. For example, you move the base of an object and all the other parts defined with respect to the base move with it. It makes picking easy, so you can move from one part to its parent. It allows you to break up materials, and in addition to that you have animation. Our beta already shows how nice it is to have the assembly tree at your disposal to create stunning animations.
  
Q6 Could you please give us some information about the next developments of OSG Composer?
A6Based on users' feature requests and workflow comments, the next release of OSG Composer will include advanced material support, better addition and subtraction selection, and more supported import and export file formats. We are also working hard on our other exciting product, 4D Composer, which will add great animation, camera control, walkthroughs, lights and much, much more.
Bee-OH
Soumit par Administrateur le jeudi, 01/07/2009
 Q&ABee-OH

Franck CRISON, Pôle Réalité Virtuelle et Systèmes Embarqués ESIEA
July 2009
 

 

"Bee-oh is based on the OGRE 3D engine. Three students worked on the project for 5 months, sacrificing most of their free time to see this ambitious project through."

< "Bee-OH" en action

  
Q1Can you briefly describe the Bee-OH project?
A1The Bee-oh project was born from a meeting between the Réseau Biodiversité pour les Abeilles (Biodiversity Network for Bees), in particular its president Philippe Lecompte, and three students of ESIEA, a French engineering school for computer science, electronics and automation (Naëm Baron, Yoann Fausther, Aurélien Milliat), who wanted to work on a virtual reality project as part of their 4th-year research project.
The project, made up of three applications, lets the user step into the shoes of a beekeeper, a farmer and a bee.
For the "Bee" application, dedicated input devices were developed. Shaped like wings, these devices capture the amplitude and speed of the user's arm movements.
  
Q2What educational message do you want to convey through serious games?
A2The message is mainly about the disappearance of bees, an essential actor in pollination. Since one of its causes is bee malnutrition, the user can physically experience how hard it is to find food when resources are scarce and far from the hive. The user can also act on the environment by taking the farmer's place and managing the crops.
  
Q3Have you been able to measure how relevant a serious game is to its users?
A3The first feedback came from the general public at the first presentation of the application during the Laval Virtual 2009 show last April. This very positive feedback showed that users were very receptive to the additional explanations given after using the application. The combination of fun and education, along with the multi-sensory stimulation, attracted many people.
  
Q4Which distribution channel do you favour for Bee-OH (CD-ROM, Web, ...)?
A4The future of the application is not yet fully settled. This first version showed that the project has produced a very interesting awareness-raising tool and that it must not remain a one-off. We have made contact with several beekeeping associations in order to involve them in our thinking.
  
Q5Can you give us some information about the development: 3D engine used, number of developers, development time?
A5Bee-oh is based on the OGRE 3D engine. Three students worked on the project for 5 months, sacrificing most of their free time to see this ambitious project through.
  
Q6 What are the main differences between a serious game such as Bee-OH and a video game?
A6One example of a difference between Bee-oh and a video game: there is no score to beat.
In the "Bee" application, the interface contains no text or numbers, so as not to disturb immersion. The user really feels the effort required and the physical fatigue depending on the distance to cover between a flower and the hive.
In the "Farmer" application, a score is shown in the interface, but it is only a biodiversity indicator that helps the user understand the impact of an action on the environment.
With Bee-oh, the objective is not to keep each person playing for a long time either: a few minutes are enough to convey the problem and give a few leads on how to solve it.
  
Q7Today, serious games are still marginal in companies; do you think their use will grow in the coming years?
A7A serious game requires a development time that is far from negligible for an industrial, business-oriented approach. With toolkits and ever more powerful computers, this may change quickly, but for now there is still a long way to go. Today this type of tool can therefore only be applied to specific cases where a return on investment is possible, either because a large audience is reached or because the safety of people or property is at stake.
  
OGREMAX
Soumit par Administrateur le jeudi, 01/07/2009
 Q&AOGREMAX, 
Derek Nedelman,
July 2009
 

 

My number one goal with OgreMax has been to provide a complete path for creating and moving assets from 3DS Max, Maya, and XSI to an Ogre3D-based application. ?

< OGREMAX for Softimage in action

  
Q1Why have you developed OgreMax, a suite of tools that exports to the Ogre3D graphics engine?
A1I developed OgreMax after I began using Ogre3D and discovered it didn't have a 3DS Max exporter that allowed me to export a scene (meshes, cameras, lights, animations, among other things), process it (by changing the up axis, rescaling units, merging meshes, and so on), load it into an external viewer for inspection, and then load it into my own application. Before creating OgreMax, I tried to get this type of functionality by piecing together an assortment of exporters, viewers, and scene loaders of varying quality, and of course nothing worked very well, if at all.
  
Q2Importing assets is a key component of the 3D pipeline. What are your principal goals?
A2My number one goal with OgreMax has been to provide a complete path for creating and moving assets from 3DS Max, Maya, and XSI to an Ogre3D-based application.
  
Q3OgreMax is available for 3DS Max, but versions are also available for Maya and XSI. Why support all these applications?
A3The first version OgreMax was released over two years ago as a simple 3DS Max-only exporter. Its feature set grew rapidly and after nearly a year of development it became apparent that the original program design needed to be overhauled in order to accomodate more features (the real-time viewports, in particular). Before moving on to this phase, I decided to try out the existing Maya and XSI exporters for Ogre3D. None of them did what I wanted so I decided that since I was going to rewrite the OgreMax exporter for 3DS Max I would also create Maya and XSI versions. Having used their software development kits in the past I knew that it would be possible to implement most of the features OgreMax already had.
  
Q4With regards to porting OgreMax to the different applications, could you please give us your feeling about the 3DS Max, Maya and XSI SDKs?
A4It's easiest to summarize my feelings with a list of pros and cons for each SDK:

3DS Max SDK
+The most comprehensive SDK. It works far better than the others because it allows you access to nearly everything.
+Has a pure C++ interface which means all plugin code can be written in the same language, resulting in easier debugging and a faster run time.
+The simplest scene graph traversal.
+The simplest data storage methods.
+The simplest user interface handling. User interface elements are created within Visual Studio's resource editor, which means there's less guessing about how things will look.
+Contains lots of sample code.
+Everything is compiled into a single plugin file. This makes manually installing OgreMax very easy.
-The programming interface is complex and messy.
-Lots of code needs to be written when creating new object and material types.
-Compile times are very slow compared to the other SDKs.

Maya SDK
+Has a well-defined and clean programming interface.
+New object types require very little code.
+Easy to add custom commands.
+Compile times are the fastest of the three SDKs.
-User interfaces need to be written in separate MEL script files, which results in a slower run time.
-Creating and accessing some data types is difficult and/or strange.
-Traversing the scene graph is tricky.
-Some functionality is not accessible through the C++ API and must instead be accessed by dynamically executing script code.
-Custom viewports are not supported. There's a somewhat similar custom renderer feature that OgreMax makes use of, but it's slow and not very useful.

XSI SDK
+New object types require very little code.
+Easy to add custom commands.
+Fast compile times.

-Custom materials and shaders must be implemented in separate script files, which is error prone since all the code that accesses the material data is written in C++.
-User interfaces need to be written in separate script files, which results in a slower run time.
-Parts of the user interface in the property page cannot be destroyed and recreated dynamically.
-XSI's custom user interface styling causes problems with many of the OgreMax dialogs. The SDK provides a way to toggle the custom styling, but this results in the OgreMax dialogs looking completely different than the XSI dialogs.
-Inconsistent notifications. Some internal notifications only occur if at least one embedded OgreMax scene window is open. This requires workarounds in the code for the case that no such windows are open.
-No way to hook into material editor 'node' menus. This makes creating OgreMax materials a little more time consuming for the user than they need to be.
-Some functionality is not accessible through the C++ API and must instead be accessed by dynamically executing script code.

  
Q5OgreMax exports most of the 3D scene details: objects, animation, bones, lights, cameras. It also has a real-time preview (through the use of custom scene viewports) that allows users to view the scene before exporting. These are very advanced features for a free tool. Which feature of OgreMax are you the most proud of?
A5I'm proud of a number of things:
-I'm proud that there are now three OgreMax exporters that provide what I originally wanted: a path for creating and moving assets from a 3D content creation application to my own application.
-I'm proud that all three exporters offer such an extensive feature set and uniform user interface. The Maya and XSI exporters are identical, and they each have roughly 95% of the features that the 3DS Max version has. This makes it very easy for artists to move from one tool to another if necessary.
-I'm proud of the fact that so many people have adopted OgreMax as their primary toolset.
  
Q6 Ogre3D's .scene and .mesh format specifications change frequently (OgreMax supports Ogre3D 1.6 & 1.7). Do you think they are the best formats for a 3D pipeline? What about intermediate formats such as FBX or Collada?
A6Any successful format needs to evolve over time, so the fact that .mesh and .scene are changing is a very good sign. It shows that the formats are being adapted to people's needs.
The FBX and Collada formats are perfectly acceptable as interchange formats for 3D content creation applications, but they will typically not be used as a final format. For example, if you were developing a game you would most likely write tools to process the FBX/Collada files, removing excess data and eventually saving to a format that your game can read. There are a few problems with this approach, however:
1)You still need to write tools - Depending on your needs, you may be able to get away with something simple such as reading the FBX/Collada files and then writing out a few lists of data. On the other hand, you may need more functionality, such as being able to join meshes together or rescale the scene. These types of tools may require a graphical user interface, which can take more time to create. Also, many types of operations seem simple at first but become very complex. For example, rescaling a scene seems simple until you realize that rescaling affects mesh vertices and all their animations, skeletons, cameras, lights, and all their related properties (see the sketch after this answer). Creating and maintaining the tools can become a full-time task.
2)Designing file formats is difficult - Again, if your needs are simple then this may not be an issue, but for all but the simplest programs creating and maintaining file formats can be even more difficult than creating the tools that use those formats.
3)Do you really want to create tools? - Why do all that work if you don't have to?
For most situations the Ogre3D formats are the best choice since they have been in use for years, and there are already tools such as OgreMax you can use to generate that type of data.
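
To make the rescaling example above concrete, here is a small, hypothetical C++ sketch (the types and fields are illustrative only, not taken from any particular SDK or from OgreMax) of why a "simple" uniform rescale has to touch far more than the mesh vertices:

#include <vector>

struct Vec3 { float x, y, z; };

struct Scene {
    std::vector<Vec3> meshVertices;        // geometry
    std::vector<Vec3> boneTranslations;    // skeleton bind poses
    std::vector<Vec3> translationKeys;     // animation tracks
    float cameraNearClip, cameraFarClip;   // camera properties in scene units
    float lightAttenuationRange;           // light properties in scene units
};

// A uniform rescale must scale every quantity expressed in scene units,
// while leaving rotations, colours, etc. untouched.
void rescale(Scene& s, float factor)
{
    auto scaleAll = [factor](std::vector<Vec3>& v) {
        for (Vec3& p : v) { p.x *= factor; p.y *= factor; p.z *= factor; }
    };
    scaleAll(s.meshVertices);
    scaleAll(s.boneTranslations);
    scaleAll(s.translationKeys);
    s.cameraNearClip        *= factor;
    s.cameraFarClip         *= factor;
    s.lightAttenuationRange *= factor;
    // ...plus physics shapes, particle emitters, LOD distances, and so on.
}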
  
Q7 What is the license of OgreMax? Do you offer commercial services around OgreMax and Ogre?
A7The OgreMax exporters are free to use for any purpose, with the exception that they may not be redistributed or resold. The OgreMax viewers and their source code are free to use for any purpose, on the condition that you credit OgreMax in your application.
By the middle of August, the exporter source code will be available for licensing. In addition, paid support will be offered.
Q8Ogre3D benefits of a large user community and has a lot of tools for exporting assets. But more generally, do you think that the creation of export tools is the job of the 3D application creator (Autodesk, for example) or the 3D engine maker?
A8The best thing 3D application creators can do is create full featured SDKs for their products that allow 3D engine makers or anyone else to create the type of tools they need.
  
  
OSG Composer
Soumit par Administrateur le jeudi, 01/07/2009
 Q&AOSG Composer

Ashraf Sultan, Simulation Lab Software
July 2009
 

 

"OSG Composer targets two main user groups: advanced 3D users, who use OSG Composer as a tool to fill a gap in their 3D pipeline, and CAD users, who are looking for an inexpensive way to view and share CAD files. This group will find OSG Composer a high-quality tool for the job, without the high price tag they are used to in the CAD arena."

< OSG Composer (Windows)

  
Q1OpenSceneGraph (OSG) is a 3D engine well known in the field of gaming, serious gaming and simulation. Is OSG relevant when it comes to displaying highly detailed CAD data?
A1When Simulation Lab Software started on what was then called SimLab Composer, we discussed whether we should use OpenGL directly or go through a high-level library. At the beginning, we decided to use OpenSceneGraph as a thin layer, to make use of the functionality already implemented in OSG. The more we used OSG, the more useful we found it for creating a CAD viewer and a 3D scene composer. It provided us with great functionality, including picking, a framework for creating 3D manipulators, out-of-the-box culling, great GUI library integration, and much more. All this makes OSG great for creating CAD software, and I guess we will see more and more OSG-based CAD applications in the future.
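
As a rough illustration of how much OSG provides out of the box - this is the standard introductory OpenSceneGraph example, not code from OSG Composer - loading and displaying a model with built-in culling and camera manipulation takes only a few lines of C++:

#include <osgDB/ReadFile>
#include <osgViewer/Viewer>

int main()
{
    // Load any format for which an osgDB plugin is available (.osg, .3ds, .obj, ...).
    osg::ref_ptr<osg::Node> model = osgDB::readNodeFile("model.osg");
    if (!model) return 1;

    // The viewer supplies the camera, trackball manipulation, culling and the
    // render loop; the application only hands it a scene graph.
    osgViewer::Viewer viewer;
    viewer.setSceneData(model.get());
    return viewer.run();
}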
  
Q2What can OSG Composer do? Is it possible to create 3D configurators and walkthroughs with OSG Composer?
A2In the first user test of OSG Composer, a few expert 3D users used the software with no introduction. The first thing they tried to do was to use it to create different configurations of the models they had imported. The presence of the assembly tree, the copy and create-instance functionality, and the 3D visual manipulators make the process of creating configurations in OSG Composer really easy and intuitive.

Because of the currently planned usage of OSG Composer, it does not support creating walkthroughs. We promise an enjoyable walkthrough creation and visualization experience in our next products.

  
Q3OSG can handle a large range of 3D formats. Could you please describe the support for Collada and PDF3D? (Is it possible to import OSG Composer Collada files into Photoshop CS4? Is it possible to publish PDF3D without Acrobat?)
A3OSG Composer targets two main user groups. The first is advanced 3D users, who use OSG Composer as a tool to fill a gap in their 3D pipeline. This gap may be adding support for unsupported file formats, composing a 3D scene, doing an advanced material breakup for rendering, fixing a problematic model, or even preparing data from different sources for use in their 3D applications. The other group is CAD users, who are looking for an inexpensive way to view and share CAD files. This group will find OSG Composer a high-quality tool for the job, without the high price tag they are used to in the CAD arena.

We wanted to give those groups the right formats to share their models. To share a scene with others, the easiest way is to create a PDF3D file. OSG Composer performs all the tasks needed to create a 3D PDF file without Acrobat having to be installed. The generated file is ready to be viewed with the free Acrobat Reader. This would not have been possible without Adobe's remarkable publication of the PDF file format as a standard.

For using OSG Composer models in other applications, the user has the option of generating a Collada file, which maintains the full assembly structure and makes the geometry ready to be used with other applications that support Collada.

In case the user wants to render the 3D scene, OSG Composer supports exporting the scene as an OBJ file, which can be used, for example, by Photoshop CS4 in a 3D layer of an image.

  
Q4OSG Composer can apply materials to CAD parts: is this an easy process, and how realistic can materials be (shaders, reflections, shadows)?
A4Material assignment in OSG Composer is as simple as dragging a material from the material tree and dropping it on the geometry you want to paint. But that is not all: for advanced users, OSG Composer provides an advanced material breakup mechanism. For example, when different parts use the same material, you can easily make each part, or each of their surfaces, use its own material. On the other hand, when you have different parts using different materials, you can make them use the same material for fast future material assignment.

In this release OSG Composer supports simple materials with ambient, diffuse and specular colors, and textures. We wanted to make sure OSG Composer will run on virtually every computer, so that everybody can enjoy the magnificent world of 3D. Future releases will add support for shaders, reflections and shadows.

  
Q5OSG Composer preserves the assembly tree; is it possible to animate assemblies too?
A5The assembly tree can be useful in many scenarios. It makes moving parts easy: for example, you move the base of an object, and all the other parts defined with respect to the base move with it. It makes picking easy, so you can move from one part to its parent. It allows you to break up materials, and in addition to that you have animation. Our beta already shows how nice it is to have the assembly tree at your disposal to create stunning animations.
  
Q6 Could you please give us some information about the next developments of OSG Composer?
A6Based on users' feature requests and workflow comments, the next release of OSG Composer will include advanced material support, better addition and subtraction selection, and more supported import and export file formats. We are also working hard on our other exciting product, 4D Composer, which will add great animation, camera control, walkthroughs, lights and much, much more.
Bee-OH
Soumit par Administrateur le jeudi, 01/07/2009
 Q&ABee-OH

Franck CRISON, Pôle Réalité Virtuelle et Systèmes Embarqués ESIEA
July 2009
 

 

"Bee-OH is based on the OGRE 3D engine. Three students worked on the project for 5 months, sacrificing most of their free time to see this ambitious project through."

< "Bee-OH" en action

  
Q1Can you briefly describe the Bee-OH project?
A1The Bee-OH project was born out of a meeting between the Réseau Biodiversité pour les Abeilles, in particular its president Philippe Lecompte, and three students from ESIEA - Ecole d'Ingénieur en Informatique, Electronique, Automatique - (Naëm Baron, Yoann Fausther, Aurélien Milliat) who wanted to work on a virtual reality project as part of their fourth-year research project.
The project, made up of three applications, lets users put themselves in the place of a beekeeper, a farmer and a bee.
For the "Bee" application, specific peripherals were developed. Shaped like wings, these devices capture the amplitude and speed of the user's arm movements.
  
Q2What is the educational message you want to convey through serious games?
A2The message is mainly about the problem of the disappearance of bees, which are essential to pollination. One of its causes being bee malnutrition, the user can physically experience how hard it is to feed when resources are scarce and far from the hive. The user can also act on the environment by taking the farmer's place and managing the crops.
  
Q3Have you been able to measure the relevance of a serious game with users?
A3The first feedback came from the general public during the application's first presentation at the Laval Virtual 2009 show last April. This very positive feedback showed that users were very receptive to further explanations after using the application. The combination of fun and education, together with the multi-sensory stimulation, attracted many people.
  
Q4Which distribution channel do you favour for Bee-OH (CD-ROM, Web...)?
A4The future of the application is not yet fully settled. This first version showed that the project has produced a very interesting awareness-raising tool, and it certainly should not remain one of a kind. We have made contact with various beekeeping associations in order to involve them in our thinking.
  
Q5Can you give us some information about the development: 3D engine used, number of developers, length of development?
A5Bee-OH is based on the OGRE 3D engine. Three students worked on the project for 5 months, sacrificing most of their free time to see this ambitious project through.
  
Q6 What are the main differences between a serious game such as Bee-OH and a video game?
A6One example of a difference between Bee-OH and a video game: there is no score to beat.
In the "Bee" application, the interface contains no text or numbers, so as not to disturb immersion. The user really feels the effort required and the physical fatigue, depending on the distance to cover between a flower and the hive.
In the "Farmer" application, a score is shown in the interface, but it is only a biodiversity indicator, there to help the user understand the impact of an action on the environment.
With Bee-OH, the goal is also not to make each person "spend time" on it: a few minutes are enough to get the problem across and to suggest a few ways of solving it.
  
Q7Today, serious games are still marginal in companies; do you think their use will grow in the coming years?
A7A serious game requires a development time that is far from negligible for an industrial, business-oriented approach. With toolkits and ever more powerful computers, this may change quickly, but for now there is still a long way to go. Today this kind of tool can therefore only be applied to specific cases where a return on investment is possible: when a large audience is reached, or in situations where the safety of people or property may be at stake.
  
Autodesk Media & Entertainment, Marc Petit
Soumit par Administrateur le mardi, 01/06/2009
 Q&A

Marc Petit, 
Autodesk Media & Entertainment
June 2009

 

"Newport will be accessible to architects, designers and engineers without any special training. It's a bit like Apple's Keynote for 3D! It integrates easily into a web environment and connects to all sorts of peripherals."

< Real-time display with 3ds max 2010

  
Q1Autodesk is the undisputed leader in the DCC field. Under these conditions, how do you keep the ground fertile for innovation? Is there enough competitive stimulation?
A1

More than ever! Whether in the film or the games industry, we are seeing major changes in production methods, for example to integrate stereoscopy. More and more, production costs have become the major topic in every segment of the industry, particularly in games. Our customers have many alternatives, whether commercial products or their own in-house development capacity. Our DCC products are very mature, so innovation focuses above all on the efficiency and productivity of the production pipeline. We work closely with our customers to bring them solutions for reducing costs and integrating new production techniques, and to motivate them to adopt our latest versions, which is crucial for us!

  
Q23ds max 2010 brings a very large number of new features. Can you tell us about the one that is, in your eyes, the most significant?
A2

The most striking thing remains the sheer number of new features, which is another way of answering the previous question ;-) More seriously, the performance of the interactive rendering module is quite stunning, and since it guarantees a high level of compatibility with mental ray, it becomes really interesting.

  
Q3Some people describe 3ds max as a "central plugin". Doesn't the software's legendary openness make it "heavier" and slower than its competitors?
A3

3ds max is recognized for its performance with extremely complex models; the openness of the software and the availability of many third-party applications make it the industry's reference platform. 3ds max is a reliable, versatile and very open platform, and we are always surprised to discover how our customers use it. Our industrial customers increasingly use it not only as a visualization solution but also for simulation.

  
Q4For many years there has been talk of a convergence between pre-rendered and real-time. Are we still far from that goal today? Can initiatives such as mental mill build bridges between these two worlds?
A4

Absolutely, this convergence is materializing year after year, and the interactive rendering module in 3ds max 2010 is proof of it: it actually uses MetaSL and mental mill technology from mental images. This technology lets us offer advanced features in the interactive module (motion blur, depth of field, tone mapping) while guaranteeing a high level of visual compatibility with the software renderer.

 

 

 

 

real-time rendering in 3ds max 2010 >

  
Q5Since Autodesk's acquisition of Softimage, the future of the Collada format seems to have darkened in favour of FBX. Do you share these concerns?
A5Not really: FBX and Collada each have their successes in different sectors of the market. We support Collada in all our products. Investing in FBX lets us deliver advanced interoperability between our products (between Revit and 3ds max, for example) and open our production environments to our partners, and also to our competitors, for the benefit of our customers who want open environments.
  
Q6Do you intend to define usage domains for 3ds max, Maya and Softimage in order to guide users' choices, or do you prefer to leave the choice free?
A6We think it is important to leave our customers free to choose, particularly because of their investments in staff training, specific developments and existing data. This leads us to duplicate certain features, but the message we get from our customers on this subject is very clear!
  
Q7Do you think FBX today meets all the needs for exchanging 3D data across Autodesk's DCC applications?
A7I think we are still quite far from the levels of interoperability our users require. We continue to invest heavily in FBX, and there is still plenty of work to do!
  
Q8It is of course too early to draw conclusions about the Softimage acquisition. Can you nevertheless tell us how the integration of the new team went and what the new prospects are for Autodesk and for Softimage users?
A8The integration was very easy; I was happy to find my former Softimage colleagues again and we got to work quickly. Of course, we are working on the next versions of XSI (now called Softimage), but we have also merged the Softimage teams with our middleware teams; together they are working on solutions for easily creating and integrating interactive characters that are believable, even at the emotional level, by combining our animation and artificial intelligence technologies.
  
Q9Among the many experiments at Autodesk Labs, we noted "Newport" and "Dragonfly". Can you briefly describe the "Newport" project, and how will this real-time 3D solution help users of Autodesk software create virtual tours? With "Newport", aren't you afraid of overshadowing vendors of real-time 3D engines such as Unity3D, Quest3D, Virtools, Nova (...)?
A9Newport is the latest offspring of the Media & Entertainment division, and it is a project particularly close to my heart! It lets you create interactive presentations of products and cars, and guided tours of buildings or digital cities. Unlike the other interactive 3D solutions you mention, Newport requires no knowledge of programming or logic to create interactive content. All the narrative and editorial work is done through very simple yet very powerful wizards. Manipulating a camera in a 3D space to get a quality shot is extremely complicated when you are a novice! A bad camera move can completely disorient viewers and make them lose the thread of the story or the tour! We make the virtual shooting process completely trivial, and the resulting sequences are often very beautiful. Newport will be accessible to architects, designers and engineers without any special training. It's a bit like Apple's Keynote for 3D! It integrates easily into a web environment and connects to all sorts of peripherals, as shown here.
  
Q10 "Dragonfly" shows what the future of Web 2.0 3D applications may look like. Do you think 3D on the web has to go through Flash?
A10 Not necessarily. Flash is an interesting interactive platform, but there are others. A project like Dragonfly demonstrates the value of solutions that are essentially server-oriented.
  
Q11About "Newport" and "Dragonfly": are these experimental technologies destined to be commercialized?
A11They are two examples of very innovative technologies; they will eventually be commercialized in one form or another.
Collada 1.5
Soumit par Administrateur le mardi, 01/09/2008
Q&A

Rémi Arnaud
Intel
, Collada 1.5
Sept. 2008

 

"If you look at the number of objects available on the internet today, or the number of applications that offer COLLADA as standard, it is clear that COLLADA is the most widespread format. With the snowball effect, this is a phenomenon that keeps accelerating, in particular with the help of 3dwarehouse, Daz3D, Poser, and now Turbosquid, which offer models in the COLLADA format."

  
Q1After these few years of availability, do you consider Collada a success?
A1

COLLADA will be a success the day data can flow freely between all applications, transparently and faithfully. On that day, creation tools, the web and applications on every platform will work magically, without the slightest export/import step, and only technical experts will know that COLLADA exists :-)
There is still a lot of work to do, but I have to say that the progress we witnessed during the recent PlugFest and during the last SIGGRAPH is spectacular and heading in the right direction.

  
Q2COLLADA seems to be succeeding where other formats have failed: can we say today that it is the most standard and most widely used 3D format?
A2

I am not sure. If you ask users, the most widely used format is obj, which is fairly dismaying given that this format supports no advanced features such as animation, physics or shaders...
But if you look at the number of objects available on the internet today, or the number of applications that offer COLLADA as standard, it is clear that COLLADA is the most widespread format. With the snowball effect, this is a phenomenon that keeps accelerating, in particular with the help of 3dwarehouse, Daz3D, Poser, and now Turbosquid, which offer models in the COLLADA format.

  
Q3Why is Collada 1.5 opening up to the industrial world? Weren't there already enough 3D formats in the CAD sector (JT Open, 3DXML, U3D...)?
A3COLLADA is a standard open to everyone: to contribute to COLLADA you just need to become a member of Khronos, which is open to any company, university or individual willing to share their intellectual property portfolio with the other members. So the question is a strange one, since COLLADA has always been open to contributors from every background. I would not be surprised to soon get help from other sectors such as the film industry, architecture, and any other industry that realizes its data is too valuable to be confined to formats that are proprietary and/or not royalty-free.
  
Q4Why would industrial users be interested in COLLADA 1.5? What can they do with it?
A4

More and more industrial users are interested in the technology coming from video games, which enables high-quality, real-time visualization and manipulation of 3D data, including animation and physics, on a very small hardware budget. Many of them are banging their heads against the wall as they see that, because of the complexity of use and a prohibitive cost, only a small percentage of their employees have access to the 3D data that is the heart of their business. At the same time, they all have access to advanced 3D technologies at home when they play video games with their children. In a competitive world it is extremely important to be able to exploit video game technology inside the company, and COLLADA is of prime importance because it allows data to flow between professional tools and applications built on video game technology.

  
Q5Google seems to want to increase COLLADA's reach on the web. Is the .zae format compressed enough for the web? Will Flash 3D engines (PaperVision, Away3D...) also be able to take advantage of Collada compressed as .zae?
A5COLLADA was created mainly to solve the problem of transferring data between tools and applications during content creation; it was not originally designed as a format to be used directly by final applications. In the video game world it is rare to ship raw data to the end user; instead you ship data in a binary format that is protected, compressed, encrypted, or simply very close to the final hardware. But COLLADA users decided otherwise and chose to use COLLADA as a direct format for many applications, in particular everything close to the internet, where XML is the default standard for data. That said, COLLADA data is more complicated, because a 3D object can reference more basic multimedia data (images, video, audio).
The issue with using COLLADA as a distribution format is packaging. It is important to be able to transmit a complete package containing the scene and the 3D objects, but also all the images (textures) and other data. Zip is not only excellent for making packages, it also compresses the data losslessly, which helps save bandwidth and disk space. (So zae is green :-) )
To answer the question more directly: in general there is far more data in the images than in the geometry, so COLLADA has only a small impact on the size of the data to transmit. But the good news is that every experiment shows that a COLLADA file compressed with zip is smaller than the equivalent binary file compressed with zip. In other words, it is very hard to do better than XML(COLLADA)+ZIP.
  
Q6By extending COLLADA's field of application, aren't you afraid of no longer satisfying the historical users who rely on Collada in their 3D workflows?
A6There is no risk: COLLADA is not trying to become a universal format for every use, but to stay focused on the video game industry. What is changing is not COLLADA, but the industrial world, which has finally taken note that video games are not an uninteresting by-product but a source of technology that needs to be understood and used. It is a huge opportunity for COLLADA and for game developers to get help from professionals in defining, for example, the structures needed for inverse kinematics. Not to mention that millions of 3D objects already exist on those companies' hard drives, and being able to use that data directly in a future game should reduce costs and lead to better relations with industrial sponsors.
  
Q7Regarding Collada 1.5, what about compatibility with the previous version, 1.4?
A7COLLADA 1.5 is not fully compatible with 1.4. The differences are only minor, but it is very important not to rule out changes that let the standard evolve, in particular the improved texture handling, which now makes it possible to point directly at the different mip-map levels, something that was not possible with 1.4.
This does not seem to bother anyone, because converting from 1.4 to 1.5 is very easy; tools will load either 1.4 or 1.5 and will offer saving as 1.4 or 1.5 for as long as necessary. The two versions coexist perfectly.
  
Q8At the beginning, COLLADA suffered from differences in implementation between the various applications (for example, sharing 3D data between 3ds max and XSI via Collada was very difficult). How can the gaps between these implementations be reduced to improve compatibility and productivity?
A8Very good question. Let me take the opportunity to make sure everyone knows that the implementation provided by Autodesk is not great, and that you should use the open-source plug-ins used by game developers for 3dsMax, Maya and MotionBuilder. And for the more adventurous, there is now the 'next gen' version of the plug-in, which runs much faster and uses practically no memory.
There are two solutions we are putting in place:
1 - the official Khronos conformance test, which will allow the various companies offering COLLADA to obtain an official quality validation from Khronos
2 - the COLLADA PlugFests. PlugFests are open to everyone and free. Developers from all backgrounds lock themselves away for two days and spend their time exchanging data in the COLLADA format, with the help of experts. It really is the best way to improve implementation quality. The PlugFests are a great success and everyone is asking for more! The best thing is to get in touch with Rita Turkowski, our revered COLLADA/Khronos Marketing Manager, to find out the place and date of the next PlugFest.
  
Q9Despite the arrival of Collada, some game engines keep their own DCC exporters (Ogre3D, Unreal, GameBryo...); do you think they will eventually support Collada to reduce their specific development work? Does Collada manage to replace these exporters (in terms of exporting geometry, materials, animations...)?
A9It is progressively changing. The problem is that, for many middleware developers, the added value lies in building a tool chain that works, and it is often hard to bring standards into a part of the product that is so critical. It takes time: historically, if you look at the other standards that have had an impact on 3D, there is always a phase where companies try to impose a proprietary format/API to keep their customers captive, or simply to have more control, but over time customer demand wins. Unreal and Ogre3D already have embryonic support for COLLADA; GameBryo has customers who did the work for them, which should be announced soon and should allow all GameBryo customers to use COLLADA directly.
  
Q10The Khronos Group and the Web3D Consortium started working together more than a year ago. What will come out of this collaboration? Does X3D still have a reason to exist?
A10X3D has a different goal from COLLADA: standardizing 3D visualization on the internet. And everyone will agree there is still a lot of work to do on that side, so X3D still has plenty on its plate! COLLADA has come to X3D's aid by letting the X3D community benefit from all the tools that export to the COLLADA format, which are now directly usable with X3D tools. Many companies from the X3D world (BitManagement, Yumetek, Pinecoast) now accept COLLADA directly as input to their X3D tools.
The collaboration efforts are now focused on simplifying the conversions needed between COLLADA and X3D for future revisions of COLLADA and X3D, in particular for CAD data, which can be visualized and manipulated more directly by X3D applications, whereas most applications that use COLLADA directly turn CAD data into triangles for real-time rendering only. (See http://www.shapeways.com/ for a solid example!)
Rendition
Soumit par Administrateur le lundi, 01/06/2008
 Q&AMatt Taylor
CTO Holomatix Ltd, 

June 2008
 

 

"One of our principles is that Rendition's "finished product" is always the same image you would get from a production render, whether you're running an interactive session or rendering a sequence offline. For example, one of our customers has Rendition running on a 100-core render farm, and they get great performance from it. (They tell us it's like having a 500-core render farm for one fifth of the cost.)"

< Rendition in action

  
Q1Please give a brief description of Rendition.
A1Rendition is a raytracing renderer which is compatible with mental ray files from Maya, 3ds Max and XSI. One of its main advantages is that it raytraces in real time, progressively refining an image, allowing an artist to get an instant view of how their final render will look. This eliminates loads of waiting around, and is great for tweaking materials, moving lights etc. In addition, the progressive image will converge to a final production-quality image.
Rendition comes with plug-ins for Maya and Max, and an XSI plug-in is planned. The Maya plug-in is the most advanced, allowing edits made to the scene (lighting, materials, geometry, in fact almost everything) to update in the rendered view immediately, in real time. The Max plug-in is less advanced at present, but still allows a scene to be rendered in Rendition at the click of a button. For the time being in XSI, you need to export an .mi file and just drag it into Rendition's main window, and you're away.
  
Q2What kind of speed gain can Rendition achieve compared to mental ray?
A2This varies, but we've seen lots of real-world scenes that complete 5 or 6 times faster than they do in mental ray. However, this is only half the story, because in addition to that speedup factor, Rendition renders progressively, so if you're just looking to check how something looks in your scene, you might get your answer in 1/100th of the time it would take with mental ray.
  
Q3Should 3D artists who want to take advantage of Rendition have a very fast CPU or a high-end graphics card?
A3Fast CPU, definitely. We don't use graphics hardware at the moment (though that may change). Get Rendition fired up on a quad core, and the realtime control you can get really is amazing!
  
Q4We have made some tests with .mi files generated by Softimage XSI 6 and got very fast results with Rendition; but what are the advantages of using Rendition compared to Softimage's render region?
A4I think it will depend on the scene. If your scene is quick to render anyway, there probably won't be much difference, due to the overhead of exporting the .mi file and dragging it into Rendition etc. However, if it's taking a while to render even your sub-region, then it's definitely worth using Rendition. We've put loads of effort into getting an image up on screen as quickly as possible, even with "hard" scenes. Of course, if it weren't for Rendition lacking an XSI plug-in, it would be a no-brainer, and that's definitely something we'll put right in the future. By the way, Rendition also supports an equivalent of render regions - just hold down shift and drag the mouse over part of the image in Rendition, and it will get prioritized in the rendering; once it's completed, the rest of the scene will continue to render to production quality.
  
Q5Could you please explain how Rendition can help 3D artists enhance their work? What kind of new workflow does Rendition propose?
A5First and foremost, there's the instant feedback. When you can immediately see the effect of a change, it allows you much finer control. It saves you loads of time, but more than that, it gives you a new level of finesse when you're tweaking things.
Then, of course there are the output channels. These can be a great time saver - Rendition can make you depth maps, normal maps, a shader or object id channel, and split your render into specular and diffuse passes, all automatically. That's actually been a really popular addition with artists we've talked to - it's more time saved from repetitive tasks, really.
Finally, there's the fact that if you are rendering out production images, the artist may be doing this on their own machine, or on a company render farm. In the first case, Rendition provides big savings in the time to complete a render, so you get your machine back sooner. For render farm use, Rendition integrates well with existing render farms. It exists for Windows, Linux and Mac, in 32-bit and 64-bit versions; it can be run from a single network location on all machines, which makes installation trivial, and it is easy to use with standard render farm management tools (plug-ins for Smedge and Deadline already exist).
  
Q6Can Rendition be used for production images?
A6Absolutely (see previous answer). One of our principles is that Rendition's "finished product" is always the same image you would get from a production render, whether you're running an interactive session or rendering a sequence offline. For example, one of our customers has Rendition running on a 100-core render farm, and they get great performance from it. (They tell us it's like having a 500-core render farm for one fifth of the cost.)
  
Q7Rendition brings almost "realtime raytracing"; do you think this technology can be used for tomorrow's interactive 3D: games, 3D walkthroughs?
A7I think raytracing will definitely become the standard technique for realtime applications like games in a few years. For example, Intel's Larrabee, from what I've read, will be a many core x86 CPU with loads of GPU type vector extensions, and that's just a raytracer's dream! Raytracing is an expensive technique, but once it can be done quickly enough, rasterisation and its endless hacks don't really stand a chance.
As far as 3D walkthroughs go, Rendition can do them right now. It's just a question of how much it can refine the image at an interactive framerate. We have plans to co-opt multiple networked machines into generating realtime views of a scene - it's a very scalable process, and I think the results will be really exciting.
  
Q83D artists can already try the beta version of Rendition for free. What do they say?
A8I think people are really excited about it. They love the realtime feedback. One artist, for example, had to position his camera view so that a reflected highlight fell just so, in a certain place. Now that's a textbook example of where Rendition turns a laborious process of trial and error into a task that just takes moments - so he was very happy. I'm also grateful that 3D artists have shown a real willingness to work with us on improving Rendition's performance. You need that really, because people need to trust their renderer, and making sure it always produces the right results takes a lot of testing. When we first released a version to the public, we were amazed by all the different things people were trying it with.
Seac 02 LinceoVR
Soumit par Administrateur le lundi, 01/06/2008
 Q&ASeac02
LinceoVR, EasyOn, Display Designer
June 2008
 

 

"The process is a drag and drop activity, material can be easily modified with few clicks, the user can ad his own new materials and OpenEXR or HDRI images. The traning time is about 1 day."

<Seac02 EasyOn

  
Q1Please give a brief description of seac02 and its activities.
A1Seac02 is a company involved in the CAV (computer-aided visualization) sector. We develop and sell virtual and augmented reality software based on OpenGL standards. Our target markets are the retail market (companies producing goods where the aesthetic component is relevant, e.g. packaging, furniture, fashion and so on), designers and architects.
Our software guarantees high visualization quality, an easy interface, and simplified functions that can also be used by non-technical users (e.g. marketing managers or merchandisers who don't know anything about 3D). We supply a transversal chain inside the company, from design review to sales activities.
  
Q2LinceoVR is dedicated to Design Review. What kind of 3d formats can it read?
A2It can read obj, step, wrl, 3ds, and iges with some limitations, but using the Rhino plugin we extend compatibility to all the 3D formats supported by McNeel.
  
Q3Is it easy to customize an object with LinceoVR?
A3Absolutely yes: the process is a drag-and-drop activity, materials can easily be modified with a few clicks, and the user can add his own new materials and OpenEXR or HDRI images. The training time is about 1 day.
  
Q4Pre-defined materials are very realistic. But how can users define their own materials? What are the benefits of using HDR images for reflections?
A4The simplest way is to modify standard materials and add them to the custom materials library with one click; the second way is to write new shaders, but that requires some programming skills. In a few months we will add some standard materials to the library to satisfy the needs our users are asking for.
  
Q5Is it possible to animate the 3D objects? How can 3D scenes be published?
A5Yes, it is possible with simple animations, and from September we will be able to import complex animations and bones from 3ds Max and Maya.
  
Q6 What is the role of EasyOn?
A6EasyOn is the next step: after the designer has defined the style of a new product, he can see the product in real time, through a camera, in the real environment, to compare it with competitors' products, or to show it directly in the end user's room (e.g. furniture at home). Merchandisers use the software to sell displays directly in the shops.
  
Q7How can augmented reality help companies benchmark their products? Is it an easy process?
A7The process is dramatically easy: the user has to drag and drop the 3D model into EasyOn, print a piece of paper, connect the camera to shoot the scene in real time, and he starts seeing the virtual object in the real world.
  
Q8Do Linceo, EasyOn and Display Designer share the same 3D objects?
A8Yes, they share the same objects. Display Designer is the end of the chain: it can handle over 200M polygons in real time, it has a customizable product library and a customizable display library, and the user can define the complete layout of a store in real time with a drag-and-drop interface.
mental mill
Soumit par Administrateur le samedi, 01/02/2008
 Q&ALudwig von Reiche
Executive Vice President of mental images
feb. 2008
 

" [...] The motivating factor behind the deal with NVIDIA is that the two companies are fairly aligned at this time. The combination of mental images and NVIDIA unite some of the greatest talents in the visual computing industry and this combination enables the development of tools and technologies to advance the state of visualization."


< mental mill user interface

  
Q1Please give a brief presentation of mental mill.
A1

A1mental mill enables artists and other professionals to develop, test and maintain shaders and complex shader graphs for hardware and software rendering through an intuitive graphical user interface with real-time visual feedback - without the need for programming skills. Software companies can incorporate parts or all of mental mill, in the form of software libraries, into their own digital content creation and design products. Shaders are automatically generated in the MetaSL® language and can be modified easily. mental images designed MetaSL to encompass the expressive power of all current and future shading languages and shading language standards. Complex cooperating shader graphs can be encapsulated into Phenomena™.
MetaSL shaders and Phenomena are valuable and future-proof assets. They do not need to be re-authored for different target platforms. The built-in proprietary mental mill compiler technology generates abstract syntax tree representations of shaders and Phenomena. These are then translated by back-end plug-in modules into various dedicated or general-purpose target languages for compilation to one or more target platforms with the respective native compilers, including CPUs (C++), GPUs (Cg, GLSL, HLSL) and other current and future platforms, eliminating the need to re-engineer and debug shaders and Phenomena for each of them. Whenever possible, mental mill produces real-time interactive visual feedback using real-time compilation of the resulting code and its immediate execution on the target platform.
mental mill ships with support for Cg, HLSL, and GLSL, as well as C++ for mental ray and RealityServer. Back-end plug-ins for other targets, such as special-purpose processors and other software renderers, can be developed by third parties using the mental mill API.

What is mental mill?
mental mill® is an innovative new approach to shader creation which allows users to concentrate their efforts on realizing imagined images. It supports:
- Creation of shaders without programming
- Graphical shader debugging and optimization
- Repurposing and reuse of shaders 
mental mill is component software, or an application, made to integrate or work with 3D software. At its core is the MetaSL® shader language - a simple yet expressive language that is a superset of all shader languages. This enables shader creation via a graphical user interface with real-time feedback, and creates shaders, Metanodes™, and Phenomena™ that are platform/environment agnostic and future-proof.
Artists and programmers who write shaders for video games, visual effects, feature animation, design visualization, etc., face many more challenges than the creative aspects of producing them. Shading languages tend to be technical, and the resulting shader code is difficult to analyze. Moreover, shaders typically depend on a particular shading language and platform, which means that work cannot be reused in other contexts. These factors limit not only productivity, but also the settings in which the results can be used. mental mill is the solution to these problems.
The foundation of mental mill is the MetaSL™ shading language, a simple yet expressive language which acts as a hub of shader generation. The MetaSL compiler that is part of mental mill has a plug-in front-end for parsers of other languages and a plug-in back-end to support any target platform. The extensibility of the compiler means that shaders are protected against other languages becoming obsolete, and also that shaders can be reused in a variety of settings.
mental mill not only allows users to write shaders which transfer easily, but also encourages the creation of compact components called Metanodes™, which can be combined into shader networks and Phenomena™ to create more complicated, visually interesting shaders. The mental mill Graphical User Interface (GUI) provides a visual representation of these shader networks for easy and intuitive manipulation. In fact, the GUI eliminates the need to work directly with code. Shader writers can use it to create, debug and optimize shaders.

  
Q2Is mental mill aimed at developers or 3D artists?
A2

A key benefit of mental mill is that it puts more control in the hands of artists and individuals of all skill levels - therefore it is squarely aimed at developers and 3D artists, as well as other creative individuals in need of shaders.
In the case of a non-technical artist or other game developers in non-engineering roles, mental mill empowers them to create hardware shaders via a graphical interface without coding expertise. The real-time feedback lets them see an immediate cause and effect, so they can experiment and learn as they go.
For technical artists, mental mill enables them to create complex shader trees - Metanodes™ and Phenomena™ - that can be passed along to, and used by, less technical individuals or teams of artists. They determine and limit the amount of control given to less technical creative staff.
One significant benefit for artists of all levels is that they no longer have to try to communicate their vision to a programmer to interpret, nor do they need to wait for the shaders to be created by coders - thus avoiding bottlenecks, which benefits the entire development process.
For developers, as well as all users of mental mill, there are a number of benefits, such as real-time, visual debugging of shaders. Since the resulting shaders are platform/environment agnostic, mental mill also negates the need to write or rewrite multiple shaders for each configuration. This makes the entire process more efficient.

  
Q3Creating a pixel or vertex shader requires some mathematics knowledge. Do you think that artists are ready for this kind of work?
A3One of the pain points in the development cycle is the bottleneck created by one or two shader programmers trying to keep up with what is usually an entire team of artists. And one of the frustrations for artists is trying to communicate their vision to a shader programmer who then has to interpret it and build shaders to achieve it. We believe that artists and other creative team members are both ready and willing to embrace something that would streamline this process.
  
Q4Can mental mill be integrated in any real-time 3D pipeline (i.e. games or serious games)?
A4mental mill is shader-authoring software, as opposed to rendering software. It is platform and software agnostic, so it can be used in conjunction with mental ray or any other 3D rendering, DCC or CAD application.
Since mental mill creates shaders in several target languages - CgFX, HLSL, GLSL, C++ for mental ray® or RealityServer® - and the resulting shaders are platform independent and environment agnostic - whether that is Linux, XP or Vista, a CPU or GPU environment - we believe it can be integrated into any type of development cycle or setup. Back-end plug-ins for other targets, such as special-purpose processors and other software renderers, can be developed by third parties using the mental mill API.
  
Q5Is mental mill free for 3D artists?
A5We provide mental mill®: Artist Edition for free, bundled with NVIDIA® FX Composer 2.0. This version of mental mill does not include MetaSL editing or debugging capabilities, and it exports exclusively to FX Composer 2.0.
In January we issued the mental mill® 1.0 Beta Release as a download on the mental images website. This full-featured edition of the software includes powerful shader editing tools such as MetaSL shader editing and an integrated graphical shader debugger. It includes a compiler that enables users to target shader generation for specific applications.
  
Q6Does it make sense to integrate mental mill directly in DCC packages?
A6A majority of mental images' business is based on the OEM or partner model. We are always seeking ways to complement our partners' software and hardware, and we do not anticipate that changing. For example, our mental ray® technology and related products have long been integrated into the leading products of our partners such as Autodesk and Avid, as this enables optimization efficiencies and a more seamless, streamlined experience for end users.
We have already been working with our partners and have developed mental mill® Integrator Edition, which is a component library for integration into applications and shader pipelines.
  
Q7The facts that NVIDIA acquired mental images and that mental mill features a render tree close to Softimage XSI's can be interpreted as a sign that real-time and pre-rendering are merging. What is your opinion about this?
A7Consumer demand for richer, more photorealistic, interactive and immersive 3D experiences - whether those experiences are games, films, or web-based - has been driving innovation in visual computing at a rapid pace. That demand is no longer limited to high-end gamers and creative professionals, but is growing more mainstream with the adoption of visually rich or configurable ecommerce experiences, virtual and social networks, and immersive, digital media-centric operating systems. We expect this to fuel a great number of advancements in visual computing for the foreseeable future, and we see mental images' technologies at the forefront of this trend.
The motivating factor behind the deal with NVIDIA is that the two companies are fairly aligned at this time. The combination of mental images and NVIDIA unites some of the greatest talents in the visual computing industry, and this combination enables the development of tools and technologies to advance the state of visualization.
  
Orealia Designer
Soumit par Administrateur le mercredi, 01/01/2008
 Q&AOREALIA Designer 
David Biau, Onesia
jan. 2008
>English version
 

 

"A partir de l?import des objets CAO, la solution Orealia offre plusieurs outils performants pour la création et l?exploitation des matériaux. Il est ainsi possible de choisir parmi plusieurs matériaux de base aux spécificités différentes (brillant, mate, HDR?) et d?en personnaliser l?apparence par plusieurs paramètres intuitifs et simples d?utilisation."

<Orealia Designer

  
Q1Can you give a brief presentation of Onesia and its activities?
A1Founded in 2004, Onesia is a company that helps businesses design, present and sell their products faster and more effectively through the use of virtual prototypes.
To do this, Onesia develops, in partnership with the CNRS and IRIT research laboratories, an innovative software solution (Orealia) for real-time, photo-realistic 3D design and visualization, aimed at professions for which visual quality is paramount: industrial design, communication/marketing, sales support, architecture...
  
Q2Who is the Orealia range aimed at? Do you have to be a 3D professional to use Orealia Designer?
A2The Orealia solution is a technological leap that delivers very high image quality for realistic, real-time visualization of 3D models. Orealia is made up of three main modules aimed at three different types of users:
Orealia|DESIGNER is intended for designers, engineers or 3D artists, to design and manipulate 3D environments and models simply and intuitively, and to visualize them naturally and quickly with photo-realistic rendering quality. Interacting with content in real time makes validating and finalizing models and virtual mock-ups more efficient.
Orealia|VIEWER is intended for sales staff or communication and marketing managers, to interactively view and edit the 3D content produced with the Designer module.
Orealia|SDK is intended for developers, to integrate Orealia's innovative technologies into third-party applications through development libraries written in C++.
  
Q3Which 3D formats does Orealia Designer handle?
A3The Designer module can import various native and neutral formats. As standard, the software offers conventional formats such as obj, dxf, dwf, 3ds, fbx and Collada. Depending on the user's needs, several other CAD-type formats can be added to the initial offering, such as Catia V5, Pro/ENGINEER, SolidWorks, AutoCAD, Rhino, IGES or STEP.
Furthermore, once the geometry has been imported, and in case it later changes, it is worth noting that our geometry update tool can re-import the model while preserving the work already done in our solution. This allows the geometric model to evolve independently of the work done on its design or its scripting, for example.
  
Q4How can realistic materials be created and applied to objects?
A4Starting from the import of the CAD objects, the Orealia solution offers several powerful tools for creating and working with materials. You can choose from several base materials with different characteristics (glossy, matte, HDR...) and customize their appearance through several intuitive, easy-to-use parameters. For example, you can make the material transparent, add textures to it, adjust its glossiness or give it relief. One of the big advantages of our solution is getting an instant result after each modification, and throughout the whole process of working on the mock-up. This radically changes the user's relationship with the software and really lets them concentrate fully on their design and analysis work.
  
Q5How many materials are supplied as standard? Can the library be extended?
A5Besides creating a material from scratch, Orealia|DESIGNER ships as standard with around a hundred predefined materials in twenty categories within a dedicated library. The categories include wood, brushed metal, ceramic, brick, paper, leather, marble, etc. To apply one of these materials, simply drag and drop it onto the 3D object. The library also includes 20 HDR environments as standard, making it very easy to instantly test an object under different ambiences and lighting conditions.
Finally, once a material has been created or modified, it can be added to the library for later use, for example in another working scene.
  
Q6How do you obtain such realistic 3D objects? What do you call a BTF material?
A6To achieve this we have developed several techniques, including a highly innovative algorithm developed with the IRIT laboratory called BTF, for Bidirectional Texture Functions. The idea is to capture, with an optical device, the optical properties of a material sample under different lighting and viewing conditions. After processing by our algorithms, the physical data of this material can then be used in real time like a traditional material. This is particularly well suited to materials such as velvet, silk, paper, etc.
Broadly speaking, current interactive virtual reality solutions use a simple photograph to represent a material. That approach cannot correctly characterize the highlights and self-shadowing inherent in its microstructure. Our innovative technique offers far more advanced realism by modelling these phenomena. The designer can thus validate the appearance of real-world materials on the virtual digital mock-up.
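For readers who want a more formal picture: a BTF is usually written as a six-dimensional function of surface position and of the incoming and outgoing light directions, whereas a plain texture keeps only the positional part, which is why it cannot reproduce self-shadowing or view-dependent highlights. The formulation below is the standard textbook definition, added here for illustration; it is not necessarily Onesia's exact parameterization.

```latex
% A BTF samples appearance per surface position (x, y) and per pair of
% incoming (i) and outgoing (o) directions, each given by two angles:
\mathrm{BTF}(x,\, y,\, \theta_i,\, \phi_i,\, \theta_o,\, \phi_o)
\qquad \text{versus} \qquad
\mathrm{Texture}(x,\, y)
```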
  
Q7What kinds of media can be generated from Orealia? Can the 3D content be distributed across a wide range of hardware?
A7One of the goals of the Orealia solution is to enable the widest possible deployment and sharing of the scenes and content that have been designed and validated. To that end we offer several options, which may or may not depend on the target hardware:
Image output (with no computation time at all!) in various formats and at very large sizes, notably for printing.
Video output, either captured on the fly (every action performed in the software is recorded) or driven by a predefined scenario, saved as MPEG, AVI, QuickTime, Flash, or an image sequence for integration into video editing software.
Generation of standalone executable applications that can be distributed and shared free of charge. Easy to exchange, they let people view the work without installing any specific software.
Deployment to Orealia|VIEWER which, with no technical skills required, notably makes it possible to view the variants of a product, trigger scenarios, benefit from the interactivity, and store viewpoints so they can be recalled and animated between one another.
  
Q8How do you position yourselves against products such as Lumiscaphe Patchwork or Autodesk Showcase?
A8Our products do not target quite the same markets. While some competing solutions are mainly aimed at heavy industry (automotive and aerospace among others), the Orealia range also addresses SMEs that manufacture, for example, consumer products.
As a result, our solution has a lower acquisition cost, with rendering performance and ergonomics at least as good as the competition.
Moreover, the entire Orealia software range runs on Windows, Linux and, soon, Mac.
Finally, our strategy is to offer tools that allow the work designed with our solutions to be exploited as widely as possible, well beyond the design office.
  
Q9What are the future evolutions of Orealia?
A9There are two kinds of evolution we can talk about today, both of which are already under way:
First, porting our software range to Apple's Mac OS X platform, in addition to the existing ones (Windows and Linux). In parallel, we are developing version 2 of Orealia|DESIGNER, which will bring many new features, notably the ability to deploy the finished work on the Web with the same visual quality, for example to facilitate and improve our customers' e-commerce initiatives.
WireFusion 5.0
Submitted by Administrator on Friday, 01/11/2007
 Q&AStefano de Carolis
President of Demicron AB
nov. 2007
 

 

 

"Bump mapping and glossiness mapping are two new features that will both increase realism and improve the performance, as you will be able to use fewer polygons in your models. We have also implemented a new edge anti-aliasing that will give more or less the same quality as the full-scene anti-aliasing found in v4, but with up to 100% better performance."

< WireFusion 5

Q1A new release of WireFusion will be soon available, what are the most wanted new features?
A1OpenGL and bump mapping are by far the most requested features.
  
Q2Why should existing users update to WF 5?
A2We have rewritten almost the entire 3D engine and also the WireFusion core engine, with the purpose of preparing WireFusion for the future. Besides better performance, an improved workflow and a bunch of new features, it will be much easier and quicker to develop and implement new features for WireFusion 5. This will become obvious in the near future, as we will soon release several great features and improvements to WireFusion 5.
  
Q3Which users are you targeting with this new release?
A3Besides our existing target groups, which mostly consist of web and 3D artists, we will focus more on industrial designers, architects and product companies.
  
Q4Concerning the software 3D engine: what are the new features? How can it help 3D artists bring more realism? Will it be faster than the previous one?
A4Bump mapping and glossiness mapping are two new features that will both increase realism and improve the performance, as you will be able to use fewer polygons in your models. We have also implemented a new edge anti-aliasing that will give more or less the same quality as the full-scene anti-aliasing found in v4, but with up to 100% better performance.
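For context, the following is one common textbook formulation of what these two maps do, added purely for illustration; it is not necessarily the exact shading model inside WireFusion 5. A bump map perturbs the interpolated normal per pixel from a height map d(u, v), and a glossiness map drives the specular exponent per pixel, so fine surface detail and varying shininess come from textures rather than from extra polygons.

```latex
% Per-pixel normal perturbed by the height-map derivatives along the
% tangent t and bitangent b:
n' = \operatorname{normalize}\!\left(n \;-\; \frac{\partial d}{\partial u}\, t \;-\; \frac{\partial d}{\partial v}\, b\right)
% Specular term whose exponent is read from the glossiness map g(u, v),
% with h the half-vector between the light and view directions:
I_{\mathrm{spec}} = k_s \,\bigl(n' \cdot h\bigr)^{\,g(u,v)}
```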
  
Q5Concerning the brand-new 3D accelerated engine: how does it work? What kind of performance can users achieve with it (FPS)?
A5You now have the option to publish your presentations with OpenGL acceleration, making it possible to run quite heavy models smoothly, even in full-screen. It is difficult to give exact numbers now, as it of course depends on your hardware, but you will typically be able to run 100,000-500,000 polygons at 10-20 FPS in full-screen.
  
Q6Are all of WF's features available in the accelerated engine?
A6We are still at an early stage with the OpenGL support in WireFusion and will add more features to it in the near future. The goal is to have at least the same features as in the software engine, meaning reflections, bump mapping, animated and interactive textures, etc.
  
Q7WF 5 introduces a new SDK. What is its role?
A7The SDK is a great way for developers to quickly create advanced features and add-ons, both for themselves and for the WireFusion community. We have also opened up and extended the WireFusion API, making it possible for developers to do even more than before.
  
Q8Will WF5 introduce changes on the workflow?
A8At first glance the user interface and workflow might look the same; however, we have implemented some great improvements. For example, we have changed the folder structure and improved the way you group visual content. We have also made it possible to bundle multiple wires between objects into a single wire. These improvements will make your projects much cleaner and easier to navigate. Another very useful improvement is that you can now connect, for example, a Button object directly to a diffuse color in-port to set the color. Previously, you had to put a Color object between the Button object and the diffuse color in-port.
  
Q9Collada and FBX are becoming more and more widely used in the DCC area, because 3D artists often use several DCC software packages. Do you plan to offer more 3D formats?
A9We have plans for FBX. However, I can't really say much more at this time.
  
Q10Java has memory limits for applet publishing. Could you please tell us how the new monitoring tools will work to help WireFusion developers?
A10The new CPU and Memory profiling tools in WireFusion 5 are really great and easy to use. They will help you find bottlenecks in your projects. The CPU profiler simply finds which objects consume the most CPU, and the Memory profiler lists how much system memory objects and resources take.
Physics Abstraction Layer (PAL)
Submitted by Administrator on Friday, 01/11/2007
 Q&APAL
Adrian Boeing
nov. 2007
 

 

"3D artists are able to take advantage of the PAL technology, through any application that supports PAL. The primary advantage for artists and animators will be the ability to select the underlying physics engine which provides the most visually pleasing results. In this way artists are able to get more control over how objects should react and move in their game or animations.

For example, an animator may wish to animate a wall being hit by a cannon ball. The results from one physics engine might not give the results an animator wants. Too many of the bricks in the wall may be flying in the wrong direction. By switching to a different physics engine, the animator can get more control over how the bricks will move."

< Each image is generated at the same point in time; however, each engine provides a different result.

  
Q1Could you please give a brief description of PAL?
A1The Physics Abstraction Layer (PAL) defines an open standard API for exchanging physically based animation content between different content creation packages and physics engines.
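To make the "abstraction layer" idea concrete, here is a minimal C++ sketch of the general pattern: one common interface, one backend per physics engine, and a factory that selects the backend at runtime. All names here are hypothetical and purely illustrative; this is not PAL's actual API (see http://pal.sourceforge.net/ for that).

```cpp
#include <memory>
#include <stdexcept>
#include <string>

// Hypothetical common interface: every backend exposes the same calls.
struct PhysicsEngine {
    virtual ~PhysicsEngine() = default;
    virtual void createRigidBox(float x, float y, float z,
                                float w, float h, float d, float mass) = 0;
    virtual void step(float dt) = 0;  // advance the simulation by dt seconds
};

// One backend per underlying engine; only this code talks to the real library.
struct BulletBackend : PhysicsEngine {
    void createRigidBox(float, float, float, float, float, float, float) override { /* forward to Bullet */ }
    void step(float) override { /* forward to Bullet */ }
};
struct OdeBackend : PhysicsEngine {
    void createRigidBox(float, float, float, float, float, float, float) override { /* forward to ODE */ }
    void step(float) override { /* forward to ODE */ }
};

// The application picks the engine by name (e.g. from a config file), so the
// scene code never changes when the user swaps engines to compare results.
std::unique_ptr<PhysicsEngine> makeEngine(const std::string& name) {
    if (name == "bullet") return std::make_unique<BulletBackend>();
    if (name == "ode")    return std::make_unique<OdeBackend>();
    throw std::runtime_error("unknown physics engine: " + name);
}

int main() {
    auto engine = makeEngine("bullet");              // switch to "ode" to compare
    engine->createRigidBox(0, 5, 0, 1, 1, 1, 2.0f);  // a falling brick
    for (int i = 0; i < 60; ++i) engine->step(1.0f / 60.0f);
}
```

The point of the pattern is that the same scene can be re-simulated with a different engine by changing a single string, which is exactly the kind of swap described later in this interview.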
  
Q2PAL offers a unique interface for most of the OpenSource engines. What about commercial products such as Havok?
A2Most open source physics engines are supported by PAL (Bullet, JigLib, ODE, OpenTissue, Tokamak), and PAL supports a number of commercial products as well, including AGEIA PhysX, Newton Game Dynamics, and True Axis Physics. However, PAL does not currently support Havok. We hope to add Havok support to PAL in the future.
  
Q3What are the relations between Collada and PAL? Can PAL be used without Collada?
A3COLLADA is one of the file formats compatible with PAL. PAL itself can be used without COLLADA, either through its own XML file format, or through the Scythe file format, or directly by a custom application.
  
Q4Is PAL for developers or for 3D artists?
A4PAL is for anyone who wishes to make physically based animations; this includes developers, researchers, animators and other 3D artists. However, PAL is primarily targeted at software developers.

3D artists are able to take advantage of the PAL technology, through any application that supports PAL. The primary advantage for artists and animators will be the ability to select the underlying physics engine which provides the most visually pleasing results. In this way artists are able to get more control over how objects should react and move in their game or animations.

For example, an animator may wish to animate a wall being hit by a cannon ball. The results from one physics engine might not give the results an animator wants. Too many of the bricks in the wall may be flying in the wrong direction. By switching to a different physics engine, the animator can get more control over how the bricks will move.

  
Q5Is it possible to generate PAL files directly from DCC software such as Maya, 3dsmax or XSI?
A5Since PAL provides support for a number of file formats, it is possible to use PAL with any software that supports COLLADA physics or the Scythe physics format. Softimage XSI has native COLLADA support, and Feeling Software provides COLLADA plug-ins for both Max and Maya. There are a number of other DCC software packages with COLLADA support, such as Blender and Houdini. Scythe support is available for 3ds files and Wavefront obj files, so any DCC software that can export to 3ds, obj, or COLLADA can take advantage of PAL.
  
Q6Is PAL open source?
A6Yes, PAL is open source and released under the BSD license. This means it is free to use in commercial applications. You can download the source code and example applications from the PAL website: http://pal.sourceforge.net/
  
Q7What is the future of PAL? Could it be included in DCC software or 3D engines?
A7The PAL project is always growing and expanding, and we hope to see PAL directly integrated into a number of DCC packages and 3D engines in the future.
ACVT
Submitted by Administrator on Tuesday, 01/10/2007
 Q&AACVT
Anton van den Hengel

Oct. 2007
 

"So VideoTrace is designed to work with whatever video you need to grab an object from, rather than specifically shot image sets. [...] This means that you can create an accurate model from a video in minutes with VideoTrace, whereas using PhotoModeller requires that you provide much more information manually, making it a much more arduous process. "

 

< VideoTrace

  
Q1Can you introduce the VideoTrace project and its purpose?
A1VideoTrace is a system for interactively generating realistic 3D models of objects from video: models that might be inserted into a video game, a simulation environment, or another video sequence. The user interacts with VideoTrace by tracing the shape of the object to be modelled over one or more frames of the video. By interpreting the sketch drawn by the user in light of 3D information obtained from computer vision techniques, a small number of simple 2D interactions can be used to generate a realistic 3D model. Each of the sketching operations in VideoTrace provides an intuitive and powerful means of modelling shape from video, and executes quickly enough to be used interactively. Immediate feedback allows the user to model rapidly those parts of the scene which are of interest and to the level of detail required. The combination of automated and manual reconstruction allows VideoTrace to model parts of the scene not visible, and to succeed in cases where purely automated approaches would fail.
  
Q2What is the difference between VideoTrace and existing 3D image-based reconstruction tools such as Realviz ImageModeler?
A2VideoTrace allows the user to model arbitrary objects in normal video. Pretty much any video that you can apply a camera tracker to can be used as input to VideoTrace. That makes it a lot more flexible than something like ImageModeler, which requires that you take a set of photographs of the object that you want to model under a very controlled (and constrained) set of circumstances. So VideoTrace is designed to work with whatever video you need to grab an object from, rather than specifically shot image sets. There are some similarities with PhotoModeler, but the interaction in VideoTrace is much more intuitive, and more powerful. This means that you can create an accurate model from a video in minutes with VideoTrace, whereas using PhotoModeler requires that you provide much more information manually, making it a much more arduous process.
The flexibility and power of the VideoTrace modelling process means that it can be used to generate models for all of the purposes you might find, or per-pixel depth maps for compositing, from whatever video you need to use it on. No special cameras are needed, no laser scans, no tape measures, just a simple intuitive tracing process.
  
Q3The VideoTrace demo is very impressive. How long has the system been developed?
A3The system has been in active development for 2 years, but it builds on work that has been carried out by the group over more than 10 years in the area. We've just (as in minutes ago) released the first beta version to a limited set of testers, so it's certainly a very usable system, which we're hoping to keep developing for a few years to come.
  
Q4Such an application must be very CPU-hungry. Can you tell us the hardware required to run VideoTrace? Does its architecture take advantage of multi-core processors?
A4It's certainly not going to run on your cell phone any time in the near future, but you really don't need to have all that powerful a machine to use it. Most of the work I've done with it has been on my laptop, which really isn't anything special. All you really need is a graphics card with OpenGL support, and it doesn't even need to be the latest version.
  
Q5In which format(s) does VideoTrace export the 3D models?
A5VideoTrace exports in a few formats, but the most useful one is VRML. Most packages import VRML, and there are translators for pretty much every other relevant format that you can think of. The current Beta has a limited set of import formats, but the next version will import from more of the camera trackers. We've implemented the functionality already, it's just a question of documenting it really.
  
Q6How did researchers from The Australian Centre for Visual Technologies and The Oxford Brookes Computer Vision Group meet around this project?
A6We've known each other for a surprisingly long time, but really the primary motivator was that Phil Torr (from Oxford Brookes) partly supervised the PhD of Anthony Dick (from Adelaide). The collaboration has been extremely positive, and one for which we've just secured another 3 years of research funding.
  
Q7Does VideoTrace make use of existing software libraries? Can you tell us which ones?
A7It uses SSL and QT, but that's about it.
  
Q8VideoTrace technology could provide a tremendous addition to existing 3D modeling packages. Have you already been contacted by open source or commercial software providers?
A8Yes we're negotiating with quite a large number of companies about the future of VideoTrace, but we haven't decided on anything yet. It seems to take a while to get these things organised.
  
Q9What improvements are to be done on the VideoTrace system? Can you give us a brief roadmap?
A9We've got a lot planned for improving the fidelity and flexibility of VideoTrace over the next few years. We're looking initially at using interactive dense matching (a technique from computer vision) to improve the way we handle curved surfaces. We're looking at how we might interactively de-light and re-light objects that are cut and pasted between video sequences. We may look at interactive camera tracking; there's a long list.
Serious Games
Submitted by Administrator on Tuesday, 01/10/2007
 Q&AVirginie Vega, 
Project Manager, Serious Games Sessions Europe,

Oct. 2007
 

"En plus des traditionnels secteurs de la défense et de la sécurité civile, les Serious Games sont désormais utilisés par les professionnels de la santé, de la Science, par les collectivités publiques mais aussi par de nombreux organismes de formation ou toute autre industrie généraliste."

Serious Games 2006

  
Q1Who is Serious Games Sessions Europe aimed at?
A1First of all, it is a trade event that will take place on 3 December 2007 in Lyon (Cité des Congrès).
The event primarily targets the video game sector. It also concerns a broad range of traditional or non-technology sectors such as military administrations, health, civil security, local authorities, companies…
  
Q2What are the event's main highlights?
A2The objective of the Serious Games Sessions: to exchange, debate, discover and test solutions that use video game technologies.
The event will build on the strengths that have made it a success: a series of conferences led by the sector's leading experts, an exhibition area open to all, and demonstrations accessible throughout the event. The aim is to give visitors as much information as possible. Besides discovering a wide range of applications, visitors and participants will have the opportunity to meet the sector's leading professionals.
  
Q3We know the historical applications of Serious Gaming (simulators). What are the new applications?
A3Indeed, the idea is not new. Widely used by the US Army to make the defence sector more attractive, or by the aeronautics and automotive industries through virtual simulators, Serious Games are no longer limited to these applications.
In addition to the traditional defence and civil-security sectors, Serious Games are now used by professionals in healthcare and science, by public authorities, and also by many training organizations and by general industry. Classic educational games, which appeared in the 1970s and 1980s, were aimed at children and teenagers. Serious Games, on the other hand, offer real training aimed not only at children but at a much wider audience, including the corporate world. Note that more than 40% of the US training market will use simulation in 2008, according to estimates from the research firm International Data Corporation. It is also important to point out that Serious Gaming is no longer limited to simulation: the use of online multiplayer worlds for training and learning is the main illustration of this.

  
Q4Are video game publishers interested in this market?
A4In reality, no. Serious Games have nothing to do with the business of publishers, who work for entertainment and the general public. Remember that Serious Games are bespoke products aimed at professionals. The software is developed at the request of companies for their employees. Development studios can reposition themselves on Serious Gaming and see growth opportunities there, but publishers themselves are not concerned by this market.

  
Q5Which players will be present at this edition?
A5Two major French Serious Games studios will take part in this 3rd edition. Accompanied by their clients (AXA and L'Oréal respectively), Daesign and Net Division will explore the challenges of Serious Gaming and present the results obtained by end users of Serious Games, supported by demonstrations. The two companies will seek to show how Serious Gaming has met their training expectations, and how they intend to pursue this new approach. ESC Chambéry will also be there to present its research work on this topic.

  
Q6Is Serious Games Sessions a national or an international event?
A6Given the success of previous editions, Lyon Game decided to expand the event and open it up to all national and European developers. Today, Serious Gaming attracts a large number of national and international players. With this openness in mind, a call for projects was launched to recruit the best international speakers. This year many experts such as Doug Wathley (United States), Kam Memarzia (UK), Per Backlund (University of Skövde, Sweden) and Wi Jong-Hyun (Chung-Ang University, South Korea) will be present. Serious Games Sessions 2007 will be placed under the sign of diversity!
  
Q7Are "enriched 3D" and "Virtual Reality" applications Serious Games? How is this new concept defined?
A7Virtual Reality applications that use technologies and know-how from video games (whether Flash or otherwise) can indeed be considered Serious Games. However, some Virtual Reality applications do not rely on these technologies, so not all of them belong to the world of Serious Gaming.
KWorld
Submitted by Administrator on Thursday, 01/08/2007
 Q&AKWorld
Petr
August 2007
 

 

 

 

"You can create virtual 3D presentations of various environments, tools, working sets, etc. You can present your car or your kitchen in 3D as executable presentation, screensaver or even in html page."

 

< K-World Editor

  
Q1Could you please give a brief description of KWorld?
A1KWorld is a real-time 3D scene editor, oriented towards small interactive environments. It is a scene editor, not a model editor; you simply import models, create effects, build the project logic and present the result.
  
Q2What kind of 3D content can be created with KWorld?
A2You can create virtual 3D presentations of various environments, tools, working sets, etc. You can present your car or your kitchen in 3D as an executable presentation, a screensaver or even in an HTML page.
  
Q3Is it user-friendly? What is the learning curve of KWorld?
A3It is targeted at non-programmer users; I hope everyone can build a nice scene themselves after completing the tutorials on the website.
  
Q4How can a graphic artist create interactions with objects?
A4All interaction and scene logic is done visually in KWorld, so the artist doesn't need to know any scripting or programming language. They simply drag & drop the 3D object into the logic view in KWorld and connect the proper connectors of entities to build the requested interaction.
  
Q5KWorld 3D presentations can be embedded in a web page; do you plan to support browsers such as Firefox?
A5I was not able to make it work in Firefox. If you can get ActiveX working in FF, your KW presentation should work. This page could help: http://www.iol.ie/~locka/mozilla/plugin.htm
  
Q6KWorld has an open plugin interface; does that mean it comes with a full SDK?
A6I am planning to publish all plugin source code, but first I have to clean the source code of some nasty words ;) A lot of things are done by plugins in KW, and I hope this feature can help many people with their own specialized 3D presentation needs.
  
Q7KWorld imports geometry as DirectX files. Does it support characters with animated bones?
A7Yes, it supports character import and animations. It does not support bone editing.
  
Q8What are the main features of KWorld regarding 3D rendering (anti-aliasing, shaders...)?
A8These features are mainly provided by plugins. Anti-aliasing is not enabled by default, but it can be turned on in a config file. The main current features include a particle system, shaders, sky boxes, volume textures and PRT. The 3D sound object is also a nice effect.
  
Q9KWorld is freeware; can it be used for commercial projects?
A9Sure, but I would like to know about it :)
AMD: Making of Ruby
Submitted by Administrator on Monday, 01/07/2007
 Q&ACallan McInally from AMD
ATI/AMD Ruby Demo
July 2007
 

"Our demo engine, called the Sushi Engine, was designed and developed in house. Creating our own engine allows us to target and focus on the cutting edge of real-time computer graphics hardware (unlike game engines that have to support multiple generations) and also allows us to gain valuable insights into the fine art of graphics engine architecting and development. "


Q1The latest ATI Radeon Ruby:Whiteout demo pushes the boundaries of ultra realism. What is ATI's aim in producing such technology demos?
A1Get to know the challenges that next-gen game developers will be facing, solve some of the problems that current-gen game developers are already facing, showcase the power and features of our latest GPUs, but most of all… we do it because we love graphics and it's just too much fun to pass up.

  
Q2How much time did the making of the demo take?
A2About 2 years (Spring 2005 -> Spring 2007) but this also includes developing a new engine and new art tools (which we will re-use for the next few years/demos/etc).

 

  
Q3Does the demo make use of a home-made, ATI-internal 3D engine? Or does it leverage a game engine such as the Unreal Engine or the CryEngine?
A3Our demo engine, called the Sushi Engine, was designed and developed in house. Creating our own engine allows us to target and focus on the cutting edge of real-time computer graphics hardware (unlike game engines that have to support multiple generations) and also allows us to gain valuable insights into the fine art of graphics engine architecting and development. We can experiment with new rendering techniques in a relatively low-risk environment (a game production environment is often hectic and it can be difficult to find time to try new things that may not work out in the end). The lessons we learn become valuable information for our hardware and software architects as well as external game developers.
  
Q4In the demo, the snow's rendering is quite astonishing. How did you reach such quality level?
A4First, we have fantastic artists who aren't scared of diving into the shader code to make adjustments if they see fit. Second, the HD 2xxx series provides enormous amounts of compute power, which enabled us to create highly configurable, procedural shaders that give the mountains a very natural and non-repetitive look. If the mountains were to be hand-painted, the quality would have been much lower… there simply isn't enough time in the day to paint all that detail into all those mountains. This kind of technique is exactly how such a virtual landscape would be created for a feature film.

In order to achieve the right look, we used subsurface scattering techniques to simulate the complex interactions that occur between snow and light. In addition to subsurface scattering, more advanced lighting models were used. Anytime you are lighting something outdoors it is important to take sky light into account. When light from the sun reaches earth, it scatters due to the gases and other particles that make up our atmosphere. So when you place an object outdoors, there's light coming at it from all different directions (not just the direction of the sun) and so the sky itself acts as a giant blue-ish area light source.
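A simplified way to write down the "sky as a giant area light" idea is shown below; this is a generic outdoor-lighting formulation added for illustration, not necessarily the exact model used in the Whiteout demo. The irradiance at a surface point is the direct sun term plus an integral of sky radiance over the visible hemisphere above the surface.

```latex
E(p) \;=\; \underbrace{E_{\mathrm{sun}}\,\max(0,\; n \cdot l_{\mathrm{sun}})}_{\text{direct sunlight}}
\;+\; \underbrace{\int_{\Omega^{+}} L_{\mathrm{sky}}(\omega)\,(n \cdot \omega)\,\mathrm{d}\omega}_{\text{blue-ish sky dome}}
```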

  
Q5The skin and lips of the Ruby character seem to come right from an animation feature film. Is it difficult to achieve?
A5Animated characters are always a challenge. The Ruby character uses many of the same animation techniques that are employed by the film industry. Our artists created an enormous set of face shapes (which you can think of as poses or expressions) and then they pick and choose from these shapes, blending in a smile here and a wink there, to build up each and every frame of animation. To speed up the process we partnered with ImageMetrics, a company that captures the facial performance of real actors using digital video and then uses complex algorithms (similar to facial recognition) to pick and choose the artist-created face shapes and blend them together to mimic our actress's performance. In addition to all this, we also developed a technique that allows artists to animate facial wrinkles on Ruby. Though it may seem like a minor detail, facial wrinkles are an important part of facial expression. A wrinkled brow is all that's needed to let you know that a person is deep in thought, and a small crease on Ruby's cheek helps let you know that her smile is genuine.
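The "pick and choose face shapes and blend them" workflow described above is commonly implemented as morph-target (blend-shape) animation: each frame's face is the neutral mesh plus a weighted sum of per-shape offsets. The C++ sketch below shows that idea in a few lines; it is a generic illustration, not AMD's demo code.

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// vertices[i] = neutral[i] + sum_k weights[k] * (shapes[k][i] - neutral[i])
// e.g. weights = {0.7f (smile), 0.2f (wink)} for one frame of animation.
std::vector<Vec3> blendShapes(const std::vector<Vec3>& neutral,
                              const std::vector<std::vector<Vec3>>& shapes,
                              const std::vector<float>& weights) {
    std::vector<Vec3> out = neutral;
    for (std::size_t k = 0; k < shapes.size(); ++k) {
        const float w = weights[k];
        for (std::size_t i = 0; i < out.size(); ++i) {
            out[i].x += w * (shapes[k][i].x - neutral[i].x);
            out[i].y += w * (shapes[k][i].y - neutral[i].y);
            out[i].z += w * (shapes[k][i].z - neutral[i].z);
        }
    }
    return out;
}
```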
  
Q6Some methods used to be available only in pre-calculated rendering, e.g. Sub-Surface Scattering, Ambient Occlusion, Environment Reflection. Are they now available to realtime content makers?
A6Yes, in fact all three of your examples are used in the Whiteout demo. These technologies are available to realtime content makers, but these techniques are also constantly improving. For example, we are on to our 4th generation of subsurface scattering technology for human skin (Ruby1 -> Ruby4). All three of those techniques fall under the "global illumination" umbrella (where the lighting calculation performed at the surface of an object depends on the global, surrounding scene), and this area of real-time computer graphics continues to be a hot area of research, with new advances being developed all the time.
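The "global illumination umbrella" remark can be made concrete with the rendering equation, quoted below in its standard form: the light leaving a surface point depends on light arriving from the whole surrounding scene, which is what subsurface scattering, ambient occlusion and environment reflection each approximate in different ways.

```latex
L_o(p, \omega_o) \;=\; L_e(p, \omega_o)
\;+\; \int_{\Omega} f_r(p, \omega_i, \omega_o)\, L_i(p, \omega_i)\,(n \cdot \omega_i)\,\mathrm{d}\omega_i
```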
  
Q7Which DCC tools were used in the making of this demo?
A7Maya, ZBrush, Modo, World Machine, Photoshop, and some of our own custom tools such as CubeMapGen, ATI Normal Mapper, and Tootle.
  
Q8Can this demo run in real time on a Radeon HD 2900XT?
A8Yes.
  
Q9Can we expect games and real time productions to reach such visual excellence? What will be left to the pre-calculated 3D field?
A9Absolutely. There are still plenty of offline simulation problems to tackle. Rendering is one aspect of offline CG. There's also physics and other forms of simulation. Interestingly, one shift we are seeing in the offline rendering field is that large-scale CPU-based render farms are being replaced by arrays of GPUs. This makes sense because of the vast compute power offered by GPUs.
MADLIX
Submitted by Administrator on Wednesday, 01/05/2007
 Q&AKhashayar Farmanbar
Chief Executive Officer
Agency9
May 2007
 

 

 

"The idea is to let users insert 3D into their own spaces, such as web pages, google pages, blogs, portfolio pages, fan pages and more."

 

MADLIX.com

  
Q1What is MADLIX?
A1The idea is to let users insert 3D into their own spaces, such as web pages, google pages, blogs, portfolio pages, fan pages and more. We want the whole internet community to be able to take advantage of the solution; hence we made sure to find a way of connecting 3D artists with end-users.
MADLIX consists of a 3D player that runs smoothly inside all Java-enabled browsers, with no need for a custom plug-in or application installation. It uses OpenGL to ensure high performance, but also offers a fall-back to software rendering if the hardware doesn't support 3D acceleration.
The MADLIX gallery at www.madlix.com is the heart of the product: 3D artists can submit their artwork there, and it is free for everyone to insert the 3D content of their choice into their web space.
  
Q2How can I publish my 3D models on the web?
A2We provide a simple and powerful tool for artists to publish their 3D artwork on MADLIX. It is available for download at the MADLIX website.
The exporter tool handles COLLADA files and also contains a plug-in for Autodesk Maya. The tool also includes a standalone viewer that handles MADLIX files (.mlx) as well as COLLADA files (.dae).
We've chosen to use the MADLIX file format, which is secured and encrypted in order to ensure file integrity and security for artists who do not want to see their artwork wander around. Gradually we will add more functionality; one addition will probably be the option of publishing 3D artwork in the COLLADA format.
  
Q3Can I publish the 3D models on my blog?
A3

Yes. That is one of the main features with MADLIX.

We discovered that many web sites were like a candy display window. You can look at the content, but not take it with you. MADLIX is the candy shop where you can take what you see with you to any web space. We're constantly adding support for communities and sites, and you can easily insert the artwork by copying the embed tag and pasting it into the HTML code of any web page.

  
Q4Why use COLLADA?
A4We want to offer as easy and reliable a way as possible for 3D artists to work with MADLIX. During the last year COLLADA has gained massive support, and there are tools available for almost all major 3D DCC tools to export content to the COLLADA format. Agency9 was one of the first companies to fully commit to COLLADA. We believe that standards that are available to anyone help create a healthy industry.
  
Q5I'm using Maya, what are the different steps for publishing 3D?
A51. Download and install the MADLIX export wizard.

2. Create your 3D model, animations, etc. Click the preview button to see what the result will look like; you can preview the whole scene or only selected objects.

3. When you are satisfied you can click the export button; again, you can choose to export all or export selected. Follow the instructions on the screen.
  
Q6Can I upload very large files?
A6Yes, we have no limitations on file size. The only drawback is that the model download time gets longer, and that lower end machines might have trouble rendering the model at decent frame rates. Generally we recommend keeping the size around 0.5 - 1 MB.
  
Q7Is it possible to publish animated characters?
A7Yes, character skinning and animation are fully supported. Characters and meshes that move might wander off the screen, of course. For now we have not enabled tracking, so users might have to keep track of the characters themselves.
  
Q8What would be the extended features of MADLIX pro?
A8All the features for the coming MADLIX Professional are not set yet. They will be decided and finalized based on the feedback we receive from users and partners that use MADLIX. One feature we do know about is the possibility of publishing 3D content without submitting it to the MADLIX content gallery. We would appreciate feedback on what features professional users would like to see in the coming MADLIX Pro.
  
Q9About AgentFX, is it possible to create online games with modern features: physics, shaders, shadow, AI?
A9Yes, definitely. AgentFX v.3 has built-in support for most major shading languages such as Cg and GLSL, as well as other high-end features such as shadows, HDR and water rendering.
AgentFX is a graphics engine, and for features that lie outside the graphics domain you may need to use other 3rd-party tools. As an example, AgentFX has been successfully used with both PhysX and ODE for physics, as well as with Matlab for more advanced simulations.
  
Q10AgentFX is based on Java. Are development costs lower than with C++ engines?
A10In our own experience, developing in Java is much faster than in traditional languages like C/C++. Development in a memory-managed environment tends to be much less error-prone and makes debugging easier.
  
Q11Is it possible to publish AgentFX content in a web page (Applet) like MADLIX?
A11Yes, with AgentFX v.3 which powers MADLIX.
MADLIX
Soumit par Administrateur le mercredi, 01/05/2007
 Q&AKhashayar Farmanbar
Chief Executive Officer
Agency9
May 2007
 

 

 

"The idea is to let users insert 3D into their own spaces, such as web pages, google pages, blogs, portfolio pages, fan pages and more."

 

MADLIX.com

  
Q1What is MADLIX?
A1The idea is to let users insert 3D into their own spaces, such as web pages, google pages, blogs, portfolio pages, fan pages and more. We want the whole internet community to be able to take advantage of the solution; hence we made sure to find a way of connecting 3D artists with end-users.
MADLIX consists of a 3D player that runs smoothly inside all Java-enabled browsers with no need for custom plug-in or application installation. It uses OpenGL to ensure high performance but also offer a fall-back to software rendering if the hardware doesn't support 3D acceleration.
The MADLIX gallery at www.madlix.com is the heart of the product, where 3D artists can submit their artwork to and it is free for everyone to insert the 3D content of their choice into their web space.
  
Q2How can I publish my 3d models on the web?
A2We provide a simple and powerful tool for artists to publish their 3D artwork on MADLIX. It is available for download at the MADLIX website.
The exporter tool handles COLLADA files and also contains a plug-in for Autodesk Maya. The tool also includes a standalone viewer that handles MADLIX files (.mlx) as well as COLLADA files (.dae).
We've chosen to use the MADLIX file format, which is secured and encrypted in order to ensure file integrity and security for artists who do not want to see their artwork wander around. Gradually we will add more functionality; one addition will probably be the option of publishing 3D artwork in the COLLADA format.
  
Q3Can I publish the 3D models on my blog?
A3Yes. That is one of the main features of MADLIX.

We discovered that many web sites were like a candy display window: you can look at the content, but you can't take it with you. MADLIX is the candy shop where you can take what you see with you to any web space. We're constantly adding support for communities and sites, and you can easily insert the artwork by copying the embed tag and pasting it into the HTML code of any web page.
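The interview does not show the actual MADLIX embed code, so the following is only a rough sketch of the kind of applet markup such an embed snippet might expand to; the class name, archive name, parameter name, and URL are all invented for illustration, and Java is used here only to print the hypothetical markup.

public class EmbedSnippet {
    public static void main(String[] args) {
        // All names and URLs below are invented for illustration; the real
        // MADLIX embed code is copied from the gallery page itself.
        String sceneUrl = "http://www.madlix.com/gallery/example.mlx";
        String embed =
            "<applet code=\"MadlixViewer.class\" archive=\"madlix-player.jar\"\n"
          + "        width=\"400\" height=\"300\">\n"
          + "  <param name=\"scene\" value=\"" + sceneUrl + "\">\n"
          + "</applet>";
        System.out.println(embed);
    }
}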

  
Q4Why use COLLADA?
A4We want to offer as easy and reliable a way as possible for 3D artists to work with MADLIX. Over the last year COLLADA has gained massive support, and exporters are available for almost all major 3D DCC tools to get content into the COLLADA format. Agency9 was one of the first companies to fully commit to COLLADA. We believe that standards available to anyone help create a healthy industry.
  
Q5I'm using Maya. What are the different steps for publishing 3D?
A51. Download and install the MADLIX export wizard.

2. Create your 3D model, animations, etc. Click the preview button to see what the result will look like; you can preview the whole scene or only selected objects.

3. When you are satisfied, click the export button; again, you can choose to export all or export selected. Follow the on-screen instructions.
  
Q6Can I upload very large files?
A6Yes, we have no limitations on file size. The only drawbacks are that the model download time gets longer and that lower-end machines might have trouble rendering the model at decent frame rates. Generally we recommend keeping the size around 0.5-1 MB.
  
Q7Is it possible to publish animated characters?
A7Yes, character skinning and animation are fully supported. Characters and meshes that move might of course wander off the screen. For now we have not added tracking, so users might have to keep track of the characters themselves.
  
Q8What will be the extended features of MADLIX Pro?
A8The feature set for the coming MADLIX Professional is not finalized yet; it will be decided based on the feedback we receive from users and partners who use MADLIX. One feature we do know about is the ability to publish 3D content without submitting it to the MADLIX content gallery. We would appreciate feedback on what features professional users would like to see in the coming MADLIX Pro.
  
Q9About AgentFX, is it possible to create online games with modern features: physics, shaders, shadows, AI?
A9Yes, definitely. AgentFX v.3 has built-in support for the major shading languages such as Cg and GLSL, as well as other high-end features such as shadows, HDR and water rendering.
AgentFX is a graphics engine, and for features that lie outside the graphics domain you may need to use third-party tools. As an example, AgentFX has been successfully used with both PhysX and ODE for physics, as well as with MATLAB for more advanced simulations.
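As a rough illustration of that split between the graphics engine and a third-party physics library, the sketch below uses simplified stand-in classes (Body, SceneNode); none of them are actual AgentFX, PhysX, or ODE APIs. The physics side advances the simulation, and the graphics side only consumes the resulting transforms.

public class PhysicsGraphicsLoop {
    // Stand-in for a rigid body from a physics library such as ODE or PhysX.
    static final class Body {
        float y = 10f, velocityY = 0f;
        void step(float dt) {                       // trivial gravity integration
            velocityY -= 9.81f * dt;
            y += velocityY * dt;
            if (y < 0f) { y = 0f; velocityY = 0f; }
        }
    }
    // Stand-in for a scene-graph node in the graphics engine.
    static final class SceneNode {
        float y;
        void setTranslationY(float value) { this.y = value; }
    }

    public static void main(String[] args) {
        Body body = new Body();
        SceneNode node = new SceneNode();
        for (int frame = 0; frame < 5; frame++) {
            body.step(1f / 60f);                    // physics library advances the simulation
            node.setTranslationY(body.y);           // graphics engine only receives transforms
            System.out.printf("frame %d: y = %.3f%n", frame, node.y);
        }
    }
}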
  
Q10AgentFX is based on Java. Are development costs lower than with C++ engines?
A10From our own experience, developing in Java is much faster than in traditional languages like C/C++. Development in a memory-managed environment tends to be much less error-prone and makes debugging easier.
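A minimal illustration of the memory-management point: in Java, scene data can be allocated freely and is reclaimed by the garbage collector once the references are dropped, whereas a C/C++ engine would have to pair every allocation with an explicit delete. The Mesh class below is a generic stand-in, not an AgentFX type.

import java.util.ArrayList;
import java.util.List;

public class MeshCache {
    // Generic stand-in for a loaded mesh; not an AgentFX class.
    static final class Mesh {
        final float[] vertices;
        Mesh(int vertexCount) { this.vertices = new float[vertexCount * 3]; }
    }

    public static void main(String[] args) {
        List<Mesh> cache = new ArrayList<Mesh>();
        for (int i = 0; i < 1000; i++) {
            cache.add(new Mesh(1024));   // allocate freely
        }
        cache.clear();                   // drop the references...
        // ...and the garbage collector reclaims the memory. In C/C++ every
        // mesh would need an explicit delete, a common source of leaks and
        // use-after-free bugs.
        System.out.println("cache cleared");
    }
}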
  
Q11Is it possible to publish AgentFX content in a web page (Applet) like MADLIX?
A11Yes, with AgentFX v.3, which powers MADLIX.
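A minimal sketch of hosting engine-rendered content in a Java applet, which is essentially what MADLIX does. The standard java.applet.Applet lifecycle is real; the AgentFX hookup is left as commented placeholders because the engine's actual API is not shown in the interview.

import java.applet.Applet;
import java.awt.Graphics;

public class Viewer3DApplet extends Applet {
    public void init() {
        // Hypothetical: read the scene to load from an applet parameter,
        // e.g. <param name="scene" value="model.mlx">.
        String scene = getParameter("scene");
        // engine = new AgentFXEngine(this);   // placeholder only, not a real call
        // engine.loadScene(scene);
        System.out.println("Would load scene: " + scene);
    }

    public void paint(Graphics g) {
        // Hypothetical: hand the drawing surface to the engine each frame.
        // engine.renderTo(g);
        g.drawString("3D scene would render here", 20, 20);
    }
}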