Carlos Garre, Domenico Mundo, Marco Gubitosa, Alessandro Toso
Mathematical Problems in Engineering, 2014. (Q2, JCR 2013)
Physical simulation is a valuable tool in many fields of engineering for the tasks of design, prototyping, and testing. General-purpose operating systems (GPOS) are designed for real-fast tasks, such as offline simulation of complex physical models that should finish as soon as possible. Interfacing hardware at a given rate (as in a hardware-in-the-loop test) instead requires maximizing time determinism, for which real-time operating systems (RTOS) are designed. In this paper, the real-fast and real-time performance of RTOS and GPOS are compared when simulating models of high complexity with large time steps. This type of application is common in the automotive industry and requires a good trade-off between real-fast and real-time performance. The performance of an RTOS and a GPOS is compared by running a tire model that is scalable in the number of degrees of freedom and parallel threads. The benchmark shows that the GPOS presents better performance in real-fast runs but worse performance in real-time runs, due to nonexplicit task switches and to the latency associated with interprocess communication (IPC) and task switching. Read publication (PDF)
Carlos Garre, Domenico Mundo, Marco Gubitosa, Alessandro Toso
SAE Technical Papers, 2014. (Q2, SJR 2013)
Real-time simulation is a valuable tool in the design and testing of vehicles and vehicle parts, mainly when interfacing with hardware modules working at a given rate, as in hardware-in-the-loop testing. Real-time operating systems (RTOS) are designed to minimize the latency of critical operations such as interrupt dispatch, task switch, or inter-process communication (IPC). General-purpose operating systems (GPOS), instead, are designed to maximize throughput in heavy-load systems. In complex simulations where the amount of work to do in one step is high, achieving real-time performance depends not only on the latency of the event starting the step, but also on the capacity of the system to compute one step in the available time. While RTOS have been shown to present lower latencies than GPOS, the choice is less clear when maximizing throughput is also critical. In this paper, the performance of RTOS and GPOS running complex real-time simulations is compared, focusing on the computation of large simulation steps. GNU/Linux was chosen as the GPOS. The RTOS was chosen with a micro-benchmark comparing the major Linux-based RTOS. Once the systems were chosen, the simulation of a tire model was used as the application case for benchmarking, comparing 52470 different configurations (with different numbers of elements and threads). The benchmark measures which configurations miss even a single deadline, and demonstrates that even in simulations with a high number of elements and large time steps, RTOS are a better choice, due mainly to the latency associated with IPC and task switching when the simulation is parallelized. Access publisher website
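The deadline-miss criterion used in such benchmarks can be sketched with a minimal fixed-rate loop. This is a generic illustration with hypothetical names, not the paper's actual harness (which also varies thread counts and scheduling policies):

```python
import time

def run_fixed_rate(step_fn, period_s, n_steps):
    """Run step_fn at a fixed rate and count missed deadlines.

    A step misses its deadline when computation time plus scheduling
    latency exceeds the period available for that step.
    """
    misses = 0
    next_deadline = time.monotonic() + period_s
    for _ in range(n_steps):
        step_fn()  # compute one simulation step
        now = time.monotonic()
        if now > next_deadline:
            misses += 1                      # deadline missed
            next_deadline = now + period_s   # resynchronize after the miss
        else:
            time.sleep(next_deadline - now)  # idle until the next tick
            next_deadline += period_s
    return misses
```

On a GPOS, `time.sleep` wakeup latency is not bounded, which is exactly the source of sporadic misses the benchmark quantifies; an RTOS bounds that latency at the cost of throughput.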
Miguel A. Otaduy, Carlos Garre, Ming C. Lin
Proceedings of the IEEE, 2013. (Q1, JCR 2012)
‘Haptic rendering’ broadly refers to conveying information about virtual objects or data to a user through tactile stimuli. In this paper, we present a general framework for haptic rendering and we outline its major building blocks. Among all applications of haptic rendering, the display of contact interactions with rigid and deformable virtual models through the sense of touch has matured considerably over the last decade. In the paper, we focus on the computational aspects of haptic rendering of contacting objects, and we classify algorithms and representations successfully used in its three major subproblems: collision detection, dynamics simulation, and constrained optimization. In addition, haptic rendering is an integral part of a multimodal experience, often involving both visual and auditory display; therefore, we also discuss multimodal implications in the choice of algorithms and representations. Read publication (PDF)
Francesco I. Cosco, Carlos Garre, Fabio Bruno, Maurizio Muzzupappa, Miguel A. Otaduy
IEEE Transactions on Visualization and Computer Graphics, 2013 (Q1, JCR 2011)
Visuo-haptic mixed reality consists of adding to a real scene the ability to see and touch virtual objects. It requires the use of see-through display technology for visually mixing real and virtual objects, and haptic devices for adding haptic interaction with the virtual objects. Unfortunately, the use of commodity haptic devices poses obstruction and misalignment issues that complicate the correct integration of a virtual tool and the user’s real hand in the mixed reality scene. In this work, we propose a novel mixed reality paradigm where it is possible to touch and see virtual objects in combination with a real scene, using commodity haptic devices, and with a visually consistent integration of the user’s hand and the virtual tool. We discuss the visual obstruction and misalignment issues introduced by commodity haptic devices, and then propose a solution that relies on four simple technical steps: color-based segmentation of the hand, tracking-based segmentation of the haptic device, background repainting using image-based models, and misalignment-free compositing of the user’s hand. We have developed a successful proof-of-concept implementation, where a user can touch virtual objects and interact with them in the context of a real scene, and we have evaluated the impact on user performance of obstruction and misalignment correction. Read publication (PDF) Watch video
Carlos Garre, Miguel A. Otaduy
Computers & Graphics, 2010 (Q2, JCR 2011)
In many haptic applications, the user interacts with the virtual environment through a rigid tool. Tool-based interaction is suitable in many applications, but the constraint of using rigid tools is not applicable to some situations, such as the use of catheters in virtual surgery, or of a rubber part in an assembly simulation. Rigid-tool-based interaction is also unable to provide force feedback regarding interaction through the human hand, due to the soft nature of human flesh. In this paper, we address some of the computational challenges of haptic interaction through deformable tools, which forms the basis for direct-hand haptic interaction. We describe a haptic rendering algorithm that enables interactive contact between deformable objects, including self-collisions and friction. This algorithm relies on a deformable tool model that combines rigid and deformable components, and we present the efficient simulation of such a model under robust implicit integration. Read publication (PDF) Watch video
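The stability benefit of implicit integration that such deformable tool models rely on can be illustrated with a one-dimensional sketch: backward Euler applied to a single stiff damped spring, where the implicit update has a closed-form solution. This is a generic textbook example, not the paper's solver:

```python
def backward_euler_spring(x, v, k, m, dt, c=0.0, steps=1):
    """Backward Euler for m*a = -k*x - c*v (1D stiff spring).

    Solving the implicit system for the new velocity keeps the
    integration stable even for large stiffness k and large dt,
    where explicit Euler would diverge.
    """
    for _ in range(steps):
        # From v' = v + dt*(-k*x' - c*v')/m and x' = x + dt*v':
        # v' = (v - dt*(k/m)*x) / (1 + dt*c/m + dt^2*k/m)
        v = (v - dt * (k / m) * x) / (1.0 + dt * c / m + dt * dt * k / m)
        x = x + dt * v  # position updated with the *new* velocity
    return x, v
```

Even with stiffness k = 1e6 and a 10 ms step, the amplitude stays bounded, which is the property that makes implicit integration attractive for interactive haptic rates.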
Maria Cuevas, Daniel Gonzalez, Ernesto de la Rubia, Carlos Garre, Luis Molina, Arcadio Reyes, David Poirier, Lorenzo Picinali
Proceedings of AES (Audio Engineering Society) International Convention, 2017. (Yet to be published)
The EU-funded 3D Tune-In (http://www.3d-tune-in.eu/) project introduces an innovative approach using 3D sound, visuals, and gamification techniques to support people using hearing aid devices. In order to achieve a high level of realism and immersiveness within the 3D audio simulations, and to allow for the emulation (within the virtual environment) of hearing aid devices and of different typologies of hearing loss, a custom open-source C++ library (the 3D Tune-In Toolkit) has been developed. The 3DTI Toolkit integrates several novel functionalities for speaker and headphone-based sound spatialization, together with generalized hearing aid and hearing loss simulators.
Daniel Gonzalez, Maria Cuevas, Carlos Garre, Luis Molina, Arcadio Reyes
Proceedings of EuroVR Conference, 2016.
Daniel Gonzalez, Maria Cuevas, Carlos Garre, Luis Molina, Arcadio Reyes
Proceedings of EuroVR Conference, 2015. Conference program
Stefano Candreva, Domenico Straface, Carlos Garre, Domenico Mundo, Laszlo Farkas, Stijn Donders, Peter Mas
Proceedings of ISMA International Conference on Noise and Vibration Engineering, 2014.
The present work focuses on a sensitivity study of the equivalent mechanism (EM) conceptual modelling technique for vehicle body crashworthiness analyses. A library implementation of EM was presented by the authors in a previous work. This paper presents an extension by treating a more complex application case: a planar beam assembly that follows the approximate topology of the front side part of a vehicle. The analysis case consists of an impact against a rigid wall. A sensitivity analysis is performed with respect to variations in the geometry, using detailed FE simulations as a reference to validate the procedure in terms of accuracy. The sensitivity analysis covers two slight modifications in the geometry of two joints. The results show that the deformation modes of the EM model vary in agreement with the modes predicted through detailed FE simulations, and that an acceptable correlation between the two simulation models is achieved in terms of rigid-wall displacement and deceleration curves. Read publication (PDF)
Carlos Garre, Domenico Straface, Stefano Candreva, Domenico Mundo, Laszlo Farkas, Stijn Donders, Peter Mas
Proceedings of FISITA World Automotive Congress, 2014.
An equivalent mechanism (EM) is a computationally inexpensive concept model capable of simulating crash modes with a straightforward link to FE models. A library approach is presented to enable the use of EM in the vehicle conceptual design process for crashworthiness, based on a prototype implementation in LMS Imagine.Lab AMESim, an integrated platform for multi-domain system simulation from the concept phase onwards. The library allows easy assembly of structures and setup of test cases, performing fast simulations with geometric visualization of crash modes. The elements developed for the library are divided into beam, joint, and boundary-condition components. Beam and joint components are characterized through FE simulations of collapsing thin-walled structures. Boundary-condition components allow clamping beams and simulating contact conditions with rigid walls. Once characterized, the components are reusable in different assemblies. The simulation of a C-shaped structure impacting against a rigid wall is presented as an application case for validation of the model through comparison with the corresponding detailed FE simulation. The conceptual model accurately replicates the deformation mode of the full FE model, and the simulation runs 480 times faster. The deceleration history and final displacement of the wall are estimated by running both the detailed and the concept simulations and compared to each other. A maximum difference of 15% for the displacement and of 0.4% for the deceleration peak is observed, which are acceptable values for the conceptual phase of automotive body design. Read publication (PDF)
Giovanni de Gaetano, Francesco I. Cosco, Carlos Garre, Carmine Maletta, Stijn Donders, Domenico Mundo
Proceedings of the 11th International Conference on Vibration Problems (ICOVP), 2013.
Sandwich structures are widely used in many technical applications, because their composition combines high rigidity and strength with good energy absorption, while keeping weight low. Their static and dynamic behaviour can be studied by performing series of experimental tests, which, however, are expensive and require long setup and execution times. For this reason, it is common to use Finite Element (FE) simulation models, achieving good static and dynamic accuracy. However, the difficulty of defining and modifying a complex model led to the development of simplified models, such as 3D equivalent models. These homogeneous models are based on specific laws and have geometric and stiffness characteristics equivalent to those of complex models. Many efforts have been spent on obtaining models resembling the characteristics of honeycomb structures. These models have reached accurate static prediction performance, but obtaining good accuracy for dynamic loads is still a challenge. Concept modelling approaches proved very useful for defining equivalent reduced models, able to reduce computational resources as well as the time needed for model modifications. In this paper, a dynamic FE-based method is used to obtain a concept model of honeycomb sandwich beams that can accurately reproduce their static and dynamic behaviour. The method consists of two steps. First, a detailed FE model of one honeycomb beam structure is developed and validated against experimental data obtained from the literature. Its natural frequencies are estimated by means of a modal analysis in free-free conditions. Then, the analytical modal model of the beam is used to derive cross-sectional stiffness properties of the equivalent 1D concept beam from the frequencies estimated by analysing the original 3D model. The analysis of a sandwich beam with a honeycomb aluminium core is presented as an application case to assess the accuracy of the proposed method. Read publication (PDF)
Alvaro G. Perez, Gabriel Cirio, Fernando Hernandez, Carlos Garre, Miguel A. Otaduy
Proceedings of World Haptics Conference, 2013.
The command of haptic devices for rendering direct interaction with the hand requires thorough knowledge of the forces and deformations caused by contact interactions on the fingers. In this paper, we propose an algorithm to simulate nonlinear elasticity under frictional contact, with the goal of establishing a model-based strategy to command haptic devices and to render direct hand interaction. The key novelty in our algorithm is an approach to model the extremely nonlinear elasticity of finger skin and flesh using strain-limiting constraints, which are seamlessly combined with frictional contact constraints in a standard constrained dynamics solver. We show that our approach enables haptic rendering of rich and compelling deformations of the fingertip. Read publication (PDF) Watch video
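The core idea of strain limiting can be sketched as a Gauss-Seidel projection that clamps each edge's stretch to a band around its rest length. This is a generic illustration with hypothetical names, not the paper's solver, which handles strain-limiting and frictional contact constraints together in one constrained dynamics solve:

```python
import math

def limit_strains(positions, edges, rest_lengths, max_strain, iterations=10):
    """Gauss-Seidel strain limiting on a set of edges.

    Each edge (i, j) with rest length L0 is projected so that its
    current length stays within [1 - max_strain, 1 + max_strain] * L0,
    mimicking the sharp stiffening of skin beyond a strain threshold.
    """
    for _ in range(iterations):
        for (i, j), L0 in zip(edges, rest_lengths):
            dx = [positions[j][k] - positions[i][k] for k in range(3)]
            L = math.sqrt(sum(c * c for c in dx))
            target = min(max(L, (1 - max_strain) * L0), (1 + max_strain) * L0)
            if L > 1e-12 and target != L:
                corr = 0.5 * (L - target) / L  # split the correction evenly
                for k in range(3):
                    positions[i][k] += corr * dx[k]
                    positions[j][k] -= corr * dx[k]
    return positions
```

A single overstretched edge is projected exactly onto the strain band; on a mesh, the repeated sweeps propagate the corrections until the whole network satisfies the limits approximately.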
Carlos Garre, Fernando Hernandez, Antonio Gracia, Miguel A. Otaduy
Proceedings of World Haptics Conference, 2011.
Operations such as object manipulation and palpation rely on the fine perception of contact forces, both in time and space. Haptic simulation of grasping, with the rendering of contact forces resulting from the manipulation of virtual objects, requires realistic yet interactive models of hand mechanics. This paper presents a model for interactive simulation of the skeletal and elastic properties of a human hand, allowing haptic grasping of virtual objects with soft finger contact. The novel aspects of the model consist of a simple technique to couple skeletal and elastic elements, an efficient dynamics solver in the presence of joints and contact constraints, and an algorithm that connects the simulation to a haptic device. Read publication (PDF) Watch video
Fernando Hernandez, Carlos Garre, Ruben Casillas, Miguel A. Otaduy
Proceedings of V Ibero-American Symposium on Computer Graphics (SIACG), 2011.
Characters, like other articulated objects and structures, are typically simulated using articulated dynamics algorithms. There are efficient linear-time algorithms for the simulation of open-chain articulated bodies, but complexity grows notably under additional constraints such as joint limits, loops or contact, or if the bodies undergo stiff joint forces. This paper presents a linear-time algorithm for the simulation of open-chain articulated bodies with joint limits and stiff joint forces. This novel algorithm uses implicit integration to simulate stiff forces in a stable manner, and avoids drift by formulating joint constraints implicitly. One additional interesting feature of the algorithm is that its practical implementation entails only small modifications to a popular algorithm. Read publication (PDF) Watch video
Miguel A. Otaduy, Carlos Garre, Jorge Gascón, Eder Miguel, Alvaro G. Perez, Javier S. Zurdo
Proceedings of Congreso Español de Informatica Grafica (CEIG), 2010.
Human joints, such as the shoulder, present intricate connections of anatomical elements such as bones, muscles, tendons, ligaments, and fat. The nature and arrangement of the various structures in the shoulder impose two main difficulties for interactive simulation: a large diversity of mechanical properties, ranging from hard bone to soft fat tissue, and complex contact situations. In this paper, we present a combination of representations, simulation methodology, and algorithms, which, altogether, provide the proper balance between simulation quality and performance for interactive medical applications. Unified representations for all dynamic objects and their dynamic state allow us to define coupling constraints and contact constraints in a general way. As a result, all dynamic objects can be simulated at once in a unified manner. We show the application of our algorithm to shoulder simulation in two medical settings: virtual arthroscopy and physiotherapy palpation. Read publication (PDF) Watch video
Francesco I. Cosco, Carlos Garre, Fabio Bruno, Maurizio Muzzupappa, Miguel A. Otaduy
Proceedings of International Symposium on Mixed and Augmented Reality (ISMAR), 2009. (Core A* Conference)
Visuo-haptic mixed reality consists of adding to a real scene the ability to see and touch virtual objects. It requires the use of see-through display technology for visually mixing real and virtual objects, and haptic devices for adding haptic interaction with the virtual objects. However, haptic devices tend to be bulky items that appear in the field of view of the user. In this work, we propose a novel mixed reality paradigm where it is possible to touch and see virtual objects in combination with a real scene, but without the visual obtrusion produced by the haptic device. This mixed reality paradigm relies on the following three technical steps: tracking of the haptic device, visual deletion of the device from the real scene, and background completion using image-based models. We have developed a successful proof-of-concept implementation, where a user can touch virtual objects in the context of a real scene. Read publication (PDF) Watch video
Carlos Garre, Miguel A. Otaduy
Proceedings of Congreso Español de Informatica Grafica (CEIG), 2009.
Most current haptic rendering techniques model either force interaction through a pen-like tool or vibration interaction on the fingertip. Such techniques cannot currently provide force feedback for interaction through the human hand. In this paper, we address some of the computational challenges in computing haptic feedback forces for hand-based interaction. We describe a haptic rendering algorithm that enables interactive contact between deformable surfaces, even with self-collisions and friction. This algorithm relies on a virtual hand model that combines rigid and deformable components, and we present the efficient simulation of such a model under robust implicit integration. Read publication (PDF) Watch video
Carlos Garre, Miguel A. Otaduy
Proceedings of World Haptics Conference, 2009.
The force-update-rate requirements of transparent rendering of virtual environments are in conflict with the computational cost of computing complex interactions between deforming objects. In this paper we introduce a novel method for satisfying high force-update rates with deformable objects, while retaining the visual quality of complex deformations and interactions. The objects that are haptically manipulated may have many degrees of freedom, but haptic interaction is often implemented in practice through low-dimensional force-feedback devices. We exploit the low-dimensional domain of the interaction to devise a novel linear approximation of interaction forces that can be efficiently evaluated at force-update rates. Moreover, our linearized force model is time-implicit, which implies that it accounts for contact constraints and the internal dynamics of deforming objects. In this paper we show examples of haptic interaction in complex situations such as large deformations, collision between deformable objects (with friction), or even self-collision. Read publication (PDF) Watch video
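The idea of evaluating a linearized force at haptic rates can be sketched as a plain first-order Taylor model refreshed at the slower simulation rate. All names here are hypothetical, and the paper's actual model is time-implicit and contact-aware, which a plain Taylor expansion is not:

```python
def make_linear_force_model(f0, J, x0):
    """First-order force model f(x) ~= f0 + J (x - x0).

    f0 (reference force), J (force Jacobian), and x0 (reference device
    configuration) are recomputed at the slow simulation rate; the
    returned closure is cheap enough to evaluate at the ~1 kHz haptic
    force-update rate in between refreshes.
    """
    n = len(x0)
    def force(x):
        return [f0[i] + sum(J[i][j] * (x[j] - x0[j]) for j in range(n))
                for i in range(len(f0))]
    return force
```

Because the device configuration is low-dimensional, J is tiny regardless of how many degrees of freedom the deformable objects have, which is what makes the per-millisecond evaluation affordable.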