ACRA 2012 Schedule

Monday 9:20 A.M. - 10:20 A.M.

Planning

Towards Robotic Visual & Acoustic Stealth for Outdoor Dynamic Target Tracking
Matthew Dunbabin (CSIRO) and Ashley Tews (CSIRO)
Covertly tracking mobile targets, either animal or human, in previously unmapped outdoor natural environments using off-road robotic platforms requires both visual and acoustic stealth. Whilst the use of robots for stealthy surveillance is not new, the majority only consider navigation for visual covertness. However, most fielded robotic systems have a non-negligible acoustic footprint arising from the onboard sensors, motors, computers and cooling systems, and also from the wheels interacting with the terrain during motion. This time-varying acoustic signature can jeopardise any visual covertness and needs to be addressed in any stealthy navigation strategy. In previous work, we addressed the initial concepts for acoustically masking a tracking robot's movements whilst it travels between observation locations selected to minimise its detectability by a dynamic natural target. This work extends the overall concept by examining the utility of real-time acoustic signature self-assessment and exploiting cast shadows as hiding locations for use in a combined visual and acoustic stealth framework.

Path Planning with Maximum Expected Map Deformation
Mark Whitty (University of New South Wales) and Jose Guivant (University of New South Wales)
Path planning for mobile robots during map construction requires the consideration of not only traversal costs but also belief state deformation, both of which have been studied independently. We therefore introduce a method for planning safe paths over cost maps given the Maximum Expected Deformation (MED) over the entire belief state. A process akin to convolving the cost map is applied, whereby the cost map is dilated by a kernel whose size varies as a function of the MED at each point in the map. The dilated cost map is then used to generate the optimal policy that is guaranteed to be safe in the presence of map deformation. A traditional EKF-SLAM process was simulated and the resulting belief state used both to calculate the feasibility of a given path and to compute the optimal policy for minimising traversal cost.
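
To illustrate the dilation idea, a minimal sketch follows; the square kernel shape, the max-cost dilation rule and the linear MED-to-radius scaling are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def dilate_costmap(cost, med, scale=1.0):
    """Dilate a 2D cost map with a square kernel whose half-width at
    each cell is proportional to the Maximum Expected Deformation (MED)
    there. Hypothetical sketch; kernel shape and scaling are assumed."""
    h, w = cost.shape
    out = cost.copy()
    for i in range(h):
        for j in range(w):
            r = int(np.ceil(scale * med[i, j]))  # kernel half-width from MED
            if r == 0:
                continue
            i0, i1 = max(0, i - r), min(h, i + r + 1)
            j0, j1 = max(0, j - r), min(w, j + r + 1)
            # take the worst (max) cost in the neighbourhood: conservative
            out[i, j] = cost[i0:i1, j0:j1].max()
    return out

cost = np.zeros((5, 5))
cost[2, 2] = 1.0          # a single high-cost obstacle cell
med = np.ones((5, 5))     # uniform expected deformation of one cell
dilated = dilate_costmap(cost, med)
```

A cell adjacent to the obstacle inherits its cost once the local MED reaches one cell, which is what makes a policy planned on the dilated map conservative.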

Motion Planning of a Planar 3R Manipulator Utilising Fluid Flow Trajectory Generation
Darwin Lau (University of Melbourne), Jonathan Eden (University of Melbourne) and Denny Oetomo (University of Melbourne)
A novel approach is proposed for motion planning of 2- and 3-degree-of-freedom manipulators utilising a fluid motion planning model. It is demonstrated that, through the introduction of transformations, the conventional trajectory planning problem in the configuration space (C-space) can be efficiently solved from the closed-form solution of the fluid motion planner. The proposed motion planner is designed to be computationally efficient for real-time implementation and dynamic environments. The manipulator fluid motion planner is demonstrated on a 3-DoF 3-revolute (3R) planar manipulator example. The results show the simplicity, effectiveness and potential of the proposed method in manipulator motion planning.

Monday 10:50 A.M. - 12:30 P.M.

Lidar Sensing

Road Terrain Type Classification based on LMS Data
Shifeng Wang (University of Technology, Sydney), Sarath Kodagoda (University of Technology, Sydney) and Lei Shi (University of Technology, Sydney)
For road vehicles, knowledge of terrain types is useful in improving passenger safety and comfort. Conventional methods are susceptible to vehicle speed variations, and in this paper we present a method that uses laser scanner data for speed-independent road type classification. Experiments were carried out on an instrumented road vehicle (CRUISE) by manually driving on a variety of road terrain types, namely asphalt, concrete, grass and gravel, at different speeds. A laser range finder is used for the purpose. The range data is capable of capturing the structural differences, while the remission values are used to observe anomalies in surface properties. Both measurements are combined in a Support Vector Machine classifier to achieve an average accuracy of 95% across the different road types.

A Mutual Information Approach to Automatic Calibration of Camera and Lidar in Natural Environments
Zachary J. Taylor (Rio Tinto Centre for Mine Automation) and Juan Nieto (Rio Tinto Centre for Mine Automation)
This paper presents a method for calibrating the extrinsic and intrinsic parameters of a camera and a lidar scanner. The approach uses normalized mutual information to compare an image with a lidar scan. A camera model that takes into account orientation, location and focal length is used to create a 2D lidar image, with the intensity of the pixels defined by the angle of the normals in the lidar scan. Particle swarm optimisation is used to find the optimal model parameters. The method is successfully validated in a natural environment with images collected by a hyperspectral camera and a 3D lidar scanner.
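
As a rough sketch of the objective being optimised, normalized mutual information between two equally sized intensity images can be computed from their joint histogram; the bin count and the toy images below are illustrative assumptions, and the particle swarm search over camera parameters is not reproduced here:

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI = (H(A) + H(B)) / H(A,B) for two equally shaped intensity
    images, estimated from their joint histogram. Illustrative sketch;
    the bin count is an assumption."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))
    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
# identical images are maximally informative about each other (NMI = 2)
self_score = normalized_mutual_information(img, img)
# an unrelated image scores close to the independence value of 1
noise_score = normalized_mutual_information(img, rng.random((64, 64)))
```

In the paper's setting, one image would be the camera frame and the other a 2D lidar image rendered under candidate camera parameters, with the optimiser seeking the parameters that maximise this score.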

A Novel Approach to 3D Laser Scanning
David Wood (Ocular Robotics) and Mark Bishop (Ocular Robotics)
This paper presents a novel 3D laser scanner and compares its performance to that of a range of approaches presently in use. Each of the alternatives is discussed with respect to a selection of key performance metrics, and the new scanner is also analysed within this context. Examples are then presented where the impact of these performance metrics on real-world applications is illustrated through simulation, and the new scanner is demonstrated to provide significant advances over existing technologies.

Omni-VISER: 3D Omni Vision-Laser Scanner
Usman Qayyum (Australian National University)
Three-dimensional perception has drawn significant attention recently, partly due to the success of the Kinect and 3D lidars such as the Velodyne. Although quite successful, these sensors still have limitations, such as limited sensing range or high cost, which restrict their use in small-scale outdoor applications. This paper presents a novel 3D scanning system that integrates a continuously rotating laser head with an omni-directional vision system, offering a full 360 degree field of view with sweeping range measurements. An extrinsic calibration procedure is also proposed, in which point correspondences are used instead of a calibration object. Key benefits of the proposed system are 1) the capability to provide full 3D measurements after each revolution and 2) fast and reliable data matching using image features, thus relieving the computational burden involved in laser scan matching. The paper presents experimental results of prototype hardware for 3D point cloud generation with texture and feature detection. An open source implementation of real-time point cloud generation is also made available.

Characterisation of the Victoria University Range Imaging System
Benjamin Mark Moffat Drayton (Victoria University of Wellington), Dale A. Carnegie (Victoria University of Wellington) and Adrian A. Dorrington (University of Waikato)
Indirect time-of-flight cameras are becoming commonplace for real-time, full field of view range imaging. This paper characterises the response of the Victoria University Range Imaging System, an indirect time-of-flight camera, focusing on precision, accuracy and a number of phenomena that influence these characteristics. Non-ideal properties of the sensor are explored, in particular non-linearity with respect to intensity and spatial non-uniformity. Particular attention is paid to the effect of the modulation frequency on the sensor and the effect of harmonics in the correlation waveform. The modulation frequency is not normally adjustable on commercial cameras and is therefore less well researched. Measurements are compared to theory and investigations are performed where deviations from the theory occur.

Monday 1:30 P.M. - 3:10 P.M.

RGB-D Sensing

Plane-based Detection of Staircases using Inverse Depth
Titus Jia Jie Tang (Monash University), Wen Lik Dennis Lui (Monash University) and Wai Ho Li (Monash University)
Staircases are a common feature in urban environments, yet they often pose a navigational challenge to both visually impaired people and autonomous mobile systems. In this paper, we propose a plane-based approach to the problem of staircase detection using depth data in inverse depth coordinates. This forms a basis for our long-term goal of assisting visually impaired people in navigating indoor environments. Our proposed algorithm iteratively uses Preemptive RANSAC in a segment-then-fit approach to detect the steps of a staircase. This allows our algorithm to detect the presence of a staircase, identify its inclination, and model each step as a plane in 3D space. Experiments were conducted using a real-world dataset of 121 images with manually labelled ground truth. Results show Type I and Type II error rates of approximately 1% and 5% respectively for the detection of staircases. Our algorithm runs at approximately 16 frames per second.
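
A basic (non-preemptive) RANSAC plane fit conveys the core of the fit step; the iteration count, inlier tolerance and synthetic data below are illustrative assumptions, and the paper's preemptive scoring, inverse depth coordinates and segment-then-fit loop are not reproduced:

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.01, rng=None):
    """Fit one plane (n, d) with n.p + d = 0 to an Nx3 point set by
    vanilla RANSAC. Minimal sketch of the fit step only."""
    rng = rng or np.random.default_rng(0)
    best_inliers, best_model = None, None
    for _ in range(iters):
        idx = rng.choice(len(points), 3, replace=False)
        p0, p1, p2 = points[idx]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue  # degenerate (collinear) sample
        n = n / norm
        d = -n @ p0
        dist = np.abs(points @ n + d)   # point-to-plane distances
        inliers = dist < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers

rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, (200, 2))
plane_pts = np.column_stack([xy, np.zeros(200)])   # points on z = 0 (a "step")
outliers = rng.uniform(-1, 1, (40, 3))             # uniform clutter
(n, d), inliers = ransac_plane(np.vstack([plane_pts, outliers]))
```

Applied per segment and repeated per step, this kind of fit yields one plane model per stair tread, from which staircase presence and inclination can be read off.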

Using Kinect for monitoring warehouse order picking operations
Xingyan Li (University of Auckland), Ian Yen-Hung Chen (University of Auckland), Stephen Thomas (University of Auckland) and Bruce A. MacDonald (University of Auckland)
In this paper we address the problem of monitoring warehouse order picking using a Kinect sensor, which provides RGB and depth information. We propose a new method that uses both 2D and 3D sensory data from the Kinect sensor for recognizing cuboids in an item picking scenario. 2D local texture-based features are derived from the Kinect sensor's RGB camera image data and are used to distinguish objects with different patterns. 3D geometric information is derived from the Kinect sensor's depth data and is useful for recognizing objects of different sizes. 2D object recognition methods usually have relatively low accuracy when the object is not sufficiently textured or uniformly illuminated. In those situations, 3D data provide geometric descriptions such as planes and volume and become a welcome addition to the 2D method. The proposed approach is implemented and tested on a simulated warehouse item picking workstation for item recognition and process monitoring. Many box-shaped items of different sizes, shapes and pattern textures are tested. The proposed approach can also be applied in many other applications.

STALKERBOT: Learning to Navigate Dynamic Human Environments by Following People
Liz Murphy (QUT) and Peter Corke (QUT)
Service robots that operate in human environments will accomplish tasks most efficiently and least disruptively if they have the capability to mimic and understand the motion patterns of the people in their workspace. This work demonstrates how a robot can create a human-centric navigational map online, and that this map reflects changes in the environment that trigger altered motion patterns of people. An RGBD sensor mounted on the robot is used to detect and track people moving through the environment. The trajectories are clustered online and organised into a tree-like probabilistic data structure which can be used to detect anomalous trajectories. A costmap is reverse-engineered from the clustered trajectories and can then inform the robot's onboard planning process. Results show that the resultant paths taken by the robot mimic expected human behaviour and allow the robot to respond to altered human motion behaviours in the environment.

Proximity Sensing and Reactive Control for Safe Manipulation
Changmook Chun (Korea Institute of Science and Technology), Chansu Suh (Korea Institute of Science and Technology) and Sungchul Kang (Korea Institute of Science and Technology)
This paper presents a new proximity sensing algorithm and a real-time reactive control for collision avoidance and preparation for safe collision, for robots that have passive safety components such as variable stiffness joints or mechanical joint torque limiters. Although passive safety components make a robot safe under accidental collision, they cannot work at or around specific poses of the robot. In order to detect obstacles, we obtain depth images from an RGBD sensor (Kinect(TM) for Windows(R)) and convert them into point clouds. The points in the clouds are classified as 'robot', 'obstacle' or 'ignored'. The reactive controller, which shares its basic principle with the potential field method, calculates virtual forces on the robot from the points identified as 'obstacle' and avoids collision with them. Simultaneously, the controller prepares for soft (safe) collision by changing the pose of the robot so that the passive safety components function effectively. We implement the algorithm on Safe-and-Speedy Arm I (SS-Arm I) and successfully demonstrate it with a task that has a constraint on the orientation of the end-effector.
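
The virtual-force calculation follows the classic potential field form; the sketch below is a generic version of that principle with illustrative gains and influence radius, not the authors' controller:

```python
import numpy as np

def repulsive_force(robot_pt, obstacle_pts, influence=0.5, gain=1.0):
    """Sum classic potential-field repulsive forces on one robot point
    from a cloud of 'obstacle' points. Standard textbook form; the
    gain and influence radius are illustrative assumptions."""
    diff = robot_pt - obstacle_pts             # vectors obstacle -> robot
    dist = np.linalg.norm(diff, axis=1)
    mask = (dist < influence) & (dist > 1e-9)  # only nearby points repel
    f = np.zeros(3)
    for v, d in zip(diff[mask], dist[mask]):
        # magnitude grows as the obstacle gets closer and is zero at
        # the edge of the influence region
        mag = gain * (1.0 / d - 1.0 / influence) / d**2
        f += mag * (v / d)
    return f

robot = np.array([0.0, 0.0, 0.0])
obstacles = np.array([[0.2, 0.0, 0.0],   # close: pushes the robot along -x
                      [2.0, 0.0, 0.0]])  # outside the influence region: ignored
force = repulsive_force(robot, obstacles)
```

Summed over every 'obstacle' point in the cloud, such forces steer the arm away from collisions while leaving distant geometry without effect.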

Improving the Performance of ICP for Real-Time Applications using an Approximate Nearest Neighbour Search
Samuel Philip Marden (UNSW) and Jose Guivant (UNSW)
Matching of 3D scans collected from a Kinect camera is performed using the Iterative Closest Point algorithm with a discretised grid-like data structure that provides constant time searching and insertion for approximate nearest neighbour searches. The algorithm has been tested in a series of indoor environments, and is shown to be more accurate and significantly faster than traditional ICP-based scan matching, as well as being robust to noise and outliers.
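
The grid-based approximate search can be sketched as a hashed voxel grid: insertion is constant time, and a query inspects only the 3x3x3 block of cells around the query point. The cell size and dictionary-of-lists layout are illustrative assumptions, not the paper's data structure:

```python
import numpy as np

class GridNN:
    """Approximate nearest-neighbour search over a hashed voxel grid.
    Points farther than one cell from the query may be missed, which
    is acceptable for ICP, where only close correspondences matter."""
    def __init__(self, cell=0.1):
        self.cell = cell
        self.cells = {}

    def _key(self, p):
        return tuple((p // self.cell).astype(int))

    def insert(self, p):
        # constant-time insertion into the point's own cell
        self.cells.setdefault(self._key(p), []).append(p)

    def nearest(self, q):
        kx, ky, kz = self._key(q)
        best, best_d = None, np.inf
        # scan the 3x3x3 block of cells around the query point
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for p in self.cells.get((kx + dx, ky + dy, kz + dz), []):
                        d = np.linalg.norm(p - q)
                        if d < best_d:
                            best, best_d = p, d
        return best, best_d

nn = GridNN(cell=0.1)
for p in np.random.default_rng(2).uniform(0, 1, (500, 3)):
    nn.insert(p)
nn.insert(np.array([0.5, 0.5, 0.5]))
match, dist = nn.nearest(np.array([0.5, 0.5, 0.5]))
```

Inside ICP's correspondence step this replaces an exact k-d tree search, trading a bounded search radius for constant-time lookups.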

Monday 3:40 P.M. - 5:20 P.M.

Unmanned Aerial Vehicles

UAV Rendezvous: From Concept to Flight Test
Daniel Briggs Wilson (Australian Centre for Field Robotics), Ali Haydar Goktogan (Australian Centre for Field Robotics) and Salah Sukkarieh (Australian Centre for Field Robotics)
The algorithms onboard Unmanned Aerial Vehicles (UAVs) are increasing in complexity and thus require a well-established concept-to-flight-test development process to accelerate deployment and avoid loss of aircraft. We present a modular multi-UAV algorithm development framework which combines design, simulation, performance evaluation and deployment in a single, high-level graphical environment. The framework is then utilised to develop a heuristic direct search algorithm for rendezvous to leader-follower formation. Fixed-wing simulation and quadrotor flight test results validate the rendezvous algorithm.

Nonlinear Dynamic Modeling for High Performance Control of a Quadrotor
Moses Bangura (ANU) and Robert Mahony (ANU)
In this paper, we present a detailed dynamic and aerodynamic model of a quadrotor that can be used for path planning and control design of high performance, complex and aggressive manoeuvres without the need for iterative learning techniques. The accepted nonlinear dynamic quadrotor model is based on a thrust and torque model with constant thrust and torque coefficients derived from static thrust tests. Such a model is no longer valid when the vehicle undertakes dynamic manoeuvres that involve significant displacement velocities. We address this by proposing an implicit thrust model that incorporates the induced momentum effects associated with changing airflow through the rotor. The proposed model uses power as input to the system. To complete the model, we propose a hybrid dynamic model to account for the switching between different vortex ring states of the rotor.

Guidance, Navigation and Control of a Small-Scale Paramotor
Jack Umenberger (The University of Sydney)
This paper presents a guidance, navigation and control software architecture intended for a lightweight, small-scale, parafoil-suspended motorised aircraft known as a paramotor. The system comprises feedback-compensated control laws for both heading and altitude tracking, and elementary path planning logic that allows for waypoint navigation. A six degree-of-freedom mathematical model describing the aircraft dynamics is first presented, followed by the derivation and system identification of simplified lateral and longitudinal linear models that are then verified by comparison with real flight data. The decoupled linear models are used for controller design by classical frequency domain techniques, and simulations demonstrating the performance of the proposed controller architecture are conducted in MATLAB Simulink.

100Hz Onboard Vision for Quadrotor State Estimation
Inkyu Sa (QUT) and Peter Corke (QUT)
This paper introduces a high-speed (100 Hz) vision-based state estimator that is suitable for quadrotor control in close-quarters manoeuvring applications. We describe the hardware and algorithms for estimating the state of the quadrotor. Experimental results for position, velocity and yaw angle estimators are presented and compared with motion capture data. A quantitative performance comparison with state-of-the-art systems is also presented.

Paper Plane: Towards Disposable Low-Cost Folded Cellulose-Substrate UAVs
Paul EI Pounds (University of Queensland)
Disposable folded cellulose-substrate micro-Unmanned Aerial Vehicles (UAVs) - paper planes - have the surprising potential to be effective platforms for deploying remote sensors at low cost. With inertial sensors and circuits printed directly on inexpensive structural material, the cost of a mini-scale aircraft can be reduced to the point that discarding the aircraft post-mission is economical. When launched from high-altitude balloons, paper UAVs capable of navigating jet-stream winds could be guided to land anywhere on Earth with no additional power input. This paper discusses paper as a multi-functional electronic-aeromechanical material for use in disposable micro UAVs. We present a proof-of-concept paper aircraft with inertial sensors and elevons, and show that the glide performance of the aircraft is not compromised by the added mass.

Tuesday 9:00 A.M. - 10:20 A.M.

SLAM

Laser-to-Radar Sensing Redundancy for Resilient Perception in Adverse Environmental Conditions
Marcos Paul Gerardo Castro (Australian Centre for Field Robotics / University of Sydney) and Thierry Peynot (Australian Centre for Field Robotics / University of Sydney)
We present an approach to promote integrity in an autonomous perception system operating in challenging conditions (in the presence of dust or smoke) on outdoor unmanned ground vehicles, using two different sensing modalities: a 2D laser range finder and millimetre-wave radar. A technique to determine data consistency between the two modalities is developed, which helps to mitigate failures. Experimental results and error analysis, obtained with an unmanned ground vehicle operating in rural environments, are presented to validate this approach.

Image Salience Weighting for Improving Appearance-Based Place Recognition using a Supervised Classifier System
Henry Williams (Victoria University of Wellington), Will Browne (Victoria University of Wellington) and Michael Milford (Queensland University of Technology)
Many state-of-the-art vision-based Simultaneous Localisation And Mapping (SLAM) and place recognition systems compute the salience of visual features in their environment. As computing salience can be problematic in radically changing environments, new low-resolution featureless systems such as SeqSLAM have been introduced, but these consider the whole image. In this paper, we implement a supervised classifier system (UCS) to learn the salience of image regions for place recognition by featureless systems. SeqSLAM only slightly benefits from training on the challenging real-world Eynsham dataset, as it already appears to filter less useful regions of a panoramic image. However, when recognition is limited to specific image regions, performance improves by more than an order of magnitude by utilising the learnt image region saliency. We then investigate whether the region salience generated from the Eynsham dataset generalizes to another car-based dataset using a perspective camera. The results suggest the general applicability of an image region salience mask for optimizing route-based navigation applications.

CAT-GRAPH+: Towards Odometry-driven Place Consolidation in Changing Environments
Stephanie M. Lowry (QUT), Gordon F. Wyeth (QUT) and Michael J. Milford (QUT)
Changing environments present a number of challenges to mobile robots, one of the most significant being mapping and localisation. This problem is particularly significant in vision-based systems where illumination and weather changes can wreak havoc with feature-based techniques. In many applications only sections of an environment undergo extreme perceptual change. Some range-based sensor mapping approaches exploit this property by combining occasional place recognition with the assumption that odometry is accurate over short periods of time. In this paper, we develop this idea in the visual domain, by using occasional vision-driven loop closures to infer loop closures in nearby locations where visual recognition is difficult due to extreme change. We demonstrate successful map creation in an environment in which change is significant but constrained to one area, where both the vanilla CAT-Graph and a Sum of Absolute Differences matcher fail, use the described techniques to link dissimilar images from matching locations, and test the robustness of the system against false inferences.

Towards Brain-based Sensor Fusion for Navigating Robots
Adam Jacobson (Queensland University of Technology) and Michael Milford (Queensland University of Technology)
Current state-of-the-art robot mapping and navigation systems produce impressive performance under a narrow range of robot platform, sensor and environmental conditions, in contrast to animals such as rats, which produce “good enough” maps that enable them to function under an incredible range of situations. In this paper we present a rat-inspired featureless sensor-fusion system that assesses the usefulness of multiple sensor modalities based on their utility and coherence for place recognition during a navigation task, without knowledge of the type of sensor. We demonstrate the system on a Pioneer robot in indoor and outdoor environments with abrupt lighting changes. Through dynamic weighting of the sensors, the system is able to perform correct place recognition and mapping where a static sensor weighting approach fails.

Tuesday 10:50 A.M. - 12:30 P.M.

Information Handling

RoboCup Standard Platform League - rUNSWift 2012 Innovations
Sean C. Harris (UNSW)
Robotic competitions encourage a developmental style of research and development where large scale robotic systems are incrementally developed as a whole. This differs from the typical research approach of solving a specific problem in isolation, but is a crucial part of reaching the long-term goals of complex AI systems. This paper outlines the innovation and development of the autonomous UNSW multi-robotic system (rUNSWift) that was entered in the Standard Platform Soccer League at the International RoboCup competition in 2012. The challenge is to deliver real-time functionality within the limited resources of an on-board processor. Novel developments in 2012 include: SLAM using one-dimensional SURF features with visual-odometry as a by-product; extending foveated imaging to field-line detection; a unified field-feature sensor model; a dual-mode Kalman filter to help disambiguate the symmetric field; robot-detection data-fusing visual and sonar observations; multi-robot tracking of the ball; and omni-directional kicking. The rUNSWift system was ranked in the top three world-wide.

Monte Carlo Sampling of Non-Gaussian Proposal Distribution in Feature-Based RBPF-SLAM
Nina Marhamati (Advanced Robotics and Automated Systems, K. N. Toosi University of Technology), Hamid Taghirad (Advanced Robotics and Automated Systems, K. N. Toosi University of Technology) and Kasra Khosoussi (Centre of Autonomous Systems, University of Technology, Sydney)
Particle filters are widely used in mobile robot localization and mapping, and choosing an appropriate proposal distribution plays a crucial role in their success. The proposal distribution conditioned on the most recent observation, known as the optimal proposal distribution (OPD), increases the number of effective particles and limits the degeneracy of the filter. Conventionally, the OPD is approximated by a Gaussian distribution, which can lead to failure if the true distribution is highly non-Gaussian. In this paper we propose two novel solutions to the problem of feature-based SLAM, through Monte Carlo approximation of the OPD, which show superior results in terms of mean squared error (MSE) and number of effective samples. The proposed methods are capable of describing a non-Gaussian OPD and dealing with nonlinear models. Simulation and experimental results in large-scale environments show that the new algorithms outperform the conventional methods.
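
The Monte Carlo idea can be sketched generically: draw candidate states from the motion model, weight them by the likelihood of the newest observation, and resample. The 1D toy below, with a range-like measurement that makes the OPD bimodal (hence non-Gaussian), is an illustrative assumption rather than the authors' SLAM formulation:

```python
import numpy as np

def sample_optimal_proposal(prev_particle, motion_noise, z, h, obs_noise,
                            n_candidates=100, rng=None):
    """Monte Carlo approximation of the optimal proposal
    p(x | x_prev, z): draw candidates from the motion model, weight by
    the likelihood of the latest observation z under the (possibly
    nonlinear) measurement function h, then resample one candidate."""
    rng = rng or np.random.default_rng(0)
    cand = prev_particle + rng.normal(0, motion_noise, n_candidates)
    w = np.exp(-0.5 * ((z - h(cand)) / obs_noise) ** 2)
    w /= w.sum()
    return rng.choice(cand, p=w)

# Toy scenario: the motion model says "near 0", but the range-like
# observation z = |x| = 1 makes the true posterior bimodal at +/-1,
# something a single-Gaussian proposal cannot represent.
samples = np.array([
    sample_optimal_proposal(0.0, 1.0, 1.0, np.abs, 0.1,
                            rng=np.random.default_rng(i))
    for i in range(300)
])
```

A Gaussian approximation of this OPD would put most of its mass near 0, between the two modes; the Monte Carlo samples land on the modes instead, which is the behaviour the paper exploits.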

Novelty based Learning of Primitive Manipulation Strategies
Ben Border (Monash University) and R. Andrew Russell (Monash University)
A novelty-based learning system is presented which self-generates primitive actions that can be used to form more complex manipulations. In the field of intelligent robotics research, manipulation is important as it enables complex yet delicate and precise interactions with the environment. Thus far, manipulation research has made only minor progress in developing general manipulation techniques. In order to provide the flexibility to adapt to unstructured environments, robotic manipulation systems will require an autonomous learning capability. Conventional machine learning algorithms, such as various forms of neural networks, reinforcement learning and decision trees, have provided good results in previous research for relatively simple tasks. However, for increasingly diverse and complex tasks these algorithms lack adaptability and expandability, and as such often yield poor performance. A relatively new field of machine learning for intelligent robotics, modelled on human behaviour, is evolving and has promising potential for manipulation applications. Motivated or novelty-based learning uses interesting occurrences or outcomes to direct and focus learning towards such observations of interest. In this project this concept is applied to a simplified robotic manipulation system so that primitive actions can be learned. It is intended that these primitive actions will then be combined to form more complex actions to complete a given task.

Building Large-Scale Occupancy Maps using an Infinite Mixture of Gaussian Process Experts
Soohwan Kim (The Australian National University) and Jonghyuk Kim (The Australian National University)
This paper proposes a novel method of occupancy map building for large-scale applications. Although Gaussian processes have been successfully applied to occupancy map building, they suffer from a high computational complexity of O(n^3), where n is the number of training data, limiting their use for large-scale mapping. We propose to take a divide-and-conquer approach by partitioning the training data into manageable subsets, combining a Dirichlet process mixture on top of a Gaussian process, which yields an infinite mixture of Gaussian process experts. Experimental results with simulated data show that our method produces accurate occupancy maps while maintaining scalability.
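
A divide-and-conquer sketch conveys the scaling argument: partitioning n training points into k subsets reduces the cubic cost to roughly k(n/k)^3 = n^3/k^2. The fixed nearest-centre partition below stands in for the Dirichlet process clustering, which is beyond this sketch; the kernel hyperparameters and 1D toy data are illustrative assumptions:

```python
import numpy as np

def gp_predict(X, y, Xs, ell=1.0, sf=1.0, noise=0.1):
    """Standard GP regression mean with a squared-exponential kernel;
    O(n^3) in the number of training points n."""
    def k(A, B):
        d = A[:, None, :] - B[None, :, :]
        return sf**2 * np.exp(-0.5 * np.sum(d**2, axis=2) / ell**2)
    K = k(X, X) + noise**2 * np.eye(len(X))
    return k(Xs, X) @ np.linalg.solve(K, y)

def mixture_predict(X, y, Xs, n_experts=4):
    """Divide-and-conquer stand-in for a mixture of GP experts:
    partition the training data by nearest centre and answer each
    query with the nearest expert's GP."""
    centers = X[np.linspace(0, len(X) - 1, n_experts, dtype=int)]
    assign = np.argmin(
        np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
    preds = np.empty(len(Xs))
    for q, xq in enumerate(Xs):
        e = np.argmin(np.linalg.norm(centers - xq, axis=1))
        m = assign == e
        preds[q] = gp_predict(X[m], y[m], xq[None])[0]
    return preds

X = np.linspace(0, 10, 200)[:, None]
y = np.sin(X[:, 0])            # stand-in for occupancy training targets
Xs = np.array([[2.0], [7.5]])  # query locations
approx = mixture_predict(X, y, Xs)
```

Each expert only ever factorises its own subset's kernel matrix, which is what makes the mixture tractable at map scale.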

Mutual Information Based Data Selection in Gaussian Processes for People Tracking
Zulkarnain Zainudin (University of Technology Sydney) and Sarath Kodagoda (University of Technology Sydney)
It is the general perception that models describing human motion patterns give rise to better long-term tracking, even with occlusions. One effective way of learning such behaviours is to use Gaussian Processes (GPs). However, as the amount of training data grows over time, the GP becomes computationally intractable. In this work, we propose a Mutual Information (MI) based technique, along with a Mahalanobis Distance (MD) measure, to keep the most informative data while discarding the least informative data. The algorithm is tested with data collected in an office environment with a Segway robot equipped with a laser range finder. It leads to more than 80% data reduction while keeping the RMS errors within the required bounds. We have also implemented a GP-based particle filter tracker for long-term people tracking with occlusions. Comparison with an Extended Kalman Filter based tracker shows the superiority of the proposed approach.
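
A simplified stand-in for the selection criterion: keep a new sample only when its Mahalanobis distance from the data already held exceeds a threshold, i.e. when it adds information the set does not already carry. The threshold and synthetic data are illustrative assumptions, and the paper's mutual information term is not reproduced:

```python
import numpy as np

def select_informative(data, new_point, threshold=3.0):
    """Keep a new sample only if its Mahalanobis distance from the
    existing data exceeds a threshold. Simplified stand-in for an
    MI/MD selection rule; the threshold is an assumption."""
    mu = data.mean(axis=0)
    cov = np.cov(data.T)
    diff = new_point - mu
    md = np.sqrt(diff @ np.linalg.inv(cov) @ diff)
    return md > threshold

rng = np.random.default_rng(3)
data = rng.normal(0.0, 1.0, (500, 2))   # training set already held
redundant = np.array([0.1, -0.1])       # near the bulk of the data
novel = np.array([6.0, 6.0])            # far outside it
```

Discarding samples like `redundant` while keeping samples like `novel` is what allows the GP training set to stay small without losing coverage of the motion patterns.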

Tuesday 1:30 P.M. - 3:10 P.M.

Human Robot Interaction and Assistive Robotics

A Study of Feature Extraction Algorithms for Optical Flow Tracking
Navid Nourani Vatani (University of Sydney), Paulo Borges (CSIRO ICT Centre) and Jonathan Roberts (CSIRO ICT Centre)
Sparse optical flow algorithms, such as the Lucas-Kanade approach, provide more robustness to noise than dense optical flow algorithms and are the preferred approach in many scenarios. Sparse optical flow algorithms estimate the displacement for a selected number of pixels in the image. These pixels can be chosen randomly; however, pixels in regions with more variance between neighbours produce more reliable displacement estimates, so the pixel locations should be chosen wisely. In this study, the suitability of Harris corners, Shi-Tomasi's "Good features to track", SIFT and SURF interest point extractors, Canny edges, and random pixel selection for the purpose of frame-by-frame tracking using a pyramidal Lucas-Kanade algorithm is investigated. The evaluation considers the important factors of processing time, feature count, and feature trackability in indoor and outdoor scenarios using ground vehicles and unmanned aerial vehicles, for the purpose of visual odometry estimation.

How People Naturally Describe Robot Behaviour
James P. Diprose (The University of Auckland), Beryl Plimmer (The University of Auckland), Bruce A. MacDonald (The University of Auckland), and John G. Hosking (Australian National University)
Existing novice robot programming systems are complex, which ironically makes them unsuitable for novices. We have analysed 19 reports of robot projects to inform the development of an ontology of critical concepts that end user robot programming environments must include. This is a first step towards simpler end user robot programming systems.

A cooperative approach to the design of an Operator Control Unit for a semi-autonomous grit-blasting robot
Stefan Lie (University of Technology Sydney)
Due to the diverse range of applications that robots cover today, Human Robot Interaction (HRI) interface design has become an equally diverse area, characterised by the different types of end users that make use of the robots. For robots to be useful, end users' needs have to be well understood by the robotics development teams. One approach that facilitates understanding those needs is Cooperative Design (Co-Design). This paper presents the results of a study that took a Co-Design approach to the design and development of a robotic Operator Control Unit (OCU). The results presented here demonstrate how co-designing with end users can increase their understanding of a robotic device and reduce potential anxieties.

Skype: a communications framework for robotics
Peter Corke (QUT), Kyran Findlater (QUT) and Elizabeth Murphy (QUT)
This paper describes an architecture for robotic telepresence and teleoperation based on the well-known tools ROS and Skype. We discuss how Skype can be used as a framework for robotic communication and can be integrated into a ROS/Linux framework to allow a remote user not only to interact with people near the robot, but to view maps, sensory data and the robot pose, and to issue commands to the robot's navigation stack. This allows the remote user to exploit the robot's autonomy, providing a much more convenient navigation interface than simple remote joysticking.

Teleoperation of a humanoid robot using full-body motion capture, example movements, and machine learning
Christopher Stanton (University of Western Sydney) and Anton Bogdanovych (University of Western Sydney)
In this paper we present and evaluate a novel method for teleoperating a humanoid robot via a full-body motion capture suit. Our method does not use any a priori analytical or mathematical modeling (e.g. forward or inverse kinematics) of the robot, and thus this approach could be applied to the calibration of any human-robot pairing, regardless of differences in physical embodiment. Our approach involves training a feed-forward neural network for each DOF on the robot to learn a mapping between sensor data from the motion capture suit and the angular position of the robot actuator to which each neural network is allocated. To collect data for the learning process, the robot leads the human operator through a series of paired synchronised movements which capture both the operator's motion capture data and the robot's actuator data. Particle swarm optimisation is then used to train each of the neural networks. The results of our experiments demonstrate that this approach provides a fast, effective and flexible method for teleoperation of a humanoid robot.
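As a rough illustration of the approach described in the abstract (not the authors' implementation), the sketch below trains a tiny feed-forward network for one degree of freedom with a minimal particle swarm optimiser, on synthetic stand-ins for the paired motion-capture/actuator recordings. All sizes, constants, and the data itself are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for the paired recordings: motion capture sensor
# readings (input) and a robot actuator angle (target) captured during
# synchronised movements.
X = rng.uniform(-1, 1, size=(200, 3))          # 3 mocap channels (assumed)
y = np.tanh(X @ np.array([0.8, -0.5, 0.3]))    # synthetic "actuator angle"

H = 5                # hidden units
n_w = 3 * H + H      # input->hidden weights plus hidden->output weights

def predict(w, X):
    """Evaluate the one-hidden-layer network from a flat weight vector."""
    W1 = w[:3 * H].reshape(3, H)
    w2 = w[3 * H:]
    return np.tanh(X @ W1) @ w2

def mse(w):
    return np.mean((predict(w, X) - y) ** 2)

# Minimal particle swarm optimisation over the flat weight vector.
n_particles, iters = 30, 200
pos = rng.normal(0, 0.5, size=(n_particles, n_w))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([mse(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([mse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(mse(gbest))  # training error of the swarm's best network
```

In the paper one such network is trained per robot DOF; here a single DOF is shown.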


Tuesday 3:40 P.M. - 5:20 P.M.

Robot Operation

jmeSim: An open source, multi-platform robotics simulator
Adam Haber (University of New South Wales)
Simulation of autonomous and teleoperated robots has become a staple of the robotics research community. High-quality physics and rendering, open source code, and multi-platform support are essential qualities for simulation packages. Many simulation environments exist that address these features to different degrees, each with its own advantages and disadvantages, but none achieves them all. Recently, the Robot Operating System (ROS) has provided researchers with a standardised platform for robotics research, and has been widely adopted. This paper presents jmeSim, an open source, multi-platform robotic simulation package which provides excellent graphical and physical fidelity together with tight integration with ROS. The simulation environment is described in detail, and several demonstrations of its capabilities are presented.

Sensor Selection Based Routing for Monitoring Gaussian Processes Modeled Spatial Phenomena
Linh Van Nguyen (University of Technology, Sydney, Australia), Ravindra Ranasinghe (University of Technology, Sydney, Australia), Sarath Kodagoda (University of Technology, Sydney, Australia) and Gamini Dissanayake (University of Technology, Sydney, Australia)
This paper addresses the trade-off between sensing quality and energy consumption in wireless sensor networks for monitoring spatial phenomena. We use a non-parametric Gaussian Process to model the spatial phenomenon to be monitored, and a simulated-annealing-based heuristic algorithm for sensor selection. Our novel Sensor Selection based Routing (SSR) algorithm uses this model to identify the most informative nodes, which keep the root mean square prediction error below a specified threshold, and constructs a minimum-energy routing tree rooted at the sink. Our experiments have verified that the proposed computationally efficient SSR algorithm has significant advantages over conventional techniques.
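The core selection criterion can be sketched as follows. This toy uses a greedy loop in place of the paper's simulated annealing, a synthetic field in place of real sensor data, and an assumed RBF kernel and lengthscale; it only illustrates "add nodes until GP prediction RMSE drops below a threshold".

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(A, B, ell=0.3):
    """Squared-exponential kernel between two point sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

# Candidate sensor node locations on a unit square and a synthetic field.
nodes = rng.uniform(0, 1, size=(40, 2))
field = np.sin(3 * nodes[:, 0]) * np.cos(3 * nodes[:, 1])

def gp_rmse(selected):
    """RMS prediction error over all nodes when only `selected` report."""
    S = nodes[selected]
    K = rbf(S, S) + 1e-6 * np.eye(len(S))          # jitter for stability
    k_star = rbf(nodes, S)
    pred = k_star @ np.linalg.solve(K, field[selected])
    return np.sqrt(np.mean((pred - field) ** 2))

# Greedy stand-in for the simulated-annealing selection: keep adding the
# node that most reduces RMSE until the threshold is met.
threshold = 0.05
selected = [0]
while gp_rmse(selected) > threshold and len(selected) < len(nodes):
    rest = [i for i in range(len(nodes)) if i not in selected]
    best = min(rest, key=lambda i: gp_rmse(selected + [i]))
    selected.append(best)

print(len(selected), gp_rmse(selected))
```

The routing-tree construction over the chosen nodes is a separate step not shown here.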

A Novel Approach to Automated Systems Engineering on a Multi-Agent Robotics Platform using Enterprise Configuration Testing Software
Stephen Cossell (ScriptRock Inc.)
This paper presents a case study of applying an enterprise-grade systems configuration management platform to a set of unmanned ground vehicles and a ground control station. Much like large-scale enterprise infrastructure, modern robotics systems are composed of many different machines communicating over a variety of media, to say nothing of the large number of modules and applications running on each machine. Each module typically has its own configuration settings, with each individual piece of configuration information being crucial to the overall working state of the robotic system. When one configuration item is changed, inadvertently or otherwise, without an operator's knowledge, a manual and lengthy expedition through a series of configuration files and command output is usually required to diagnose the cause of the problem. This situation is exacerbated when the platform is used by a range of people for different scenarios on a regular basis. The ScriptRock platform is used by large enterprise software and infrastructure teams to encode system configuration requirements into executable documentation, so that the underlying environment their applications run on can be validated immediately. Applying the ScriptRock platform to a multi-agent robotic system has shown improved re-configuration times between different use cases, as well as a significantly simplified troubleshooting and diagnosis process when the system is found not to be in a working state.

Dynamic Modelling and Analysis of a Vectored Thrust Aerial Vehicle
Wei Yuan (University of New South Wales), Hiranya Jayakody (University of New South Wales) and Jayantha Katupitiya (University of New South Wales)
This paper presents the dynamic modelling of a Vectored Thrust Aerial Vehicle (VTAV) powered by ducted fans, some of which can be vectored. First, a comprehensive nonlinear dynamic model of the system is developed. The model is then linearized around the hover equilibrium and the characteristics of the linearized model are analyzed. The performance of the linearized model is compared with the nonlinear model for various test conditions in order to identify the important parameters that need to be taken into consideration in developing a robust controller for the VTAV.

Towards Reduced-Order Models for Online Motion Planning and Control of UAVs in the Presence of Wind
Ashray A. Doshi (The University of Queensland), Surya P. Singh (The University of Queensland) and Adam J. Postula (The University of Queensland)
This paper describes a model reduction strategy for obtaining a computationally efficient kinematic prediction of a fixed-wing UAV performing waypoint navigation under steady wind conditions. The strategy relies on the off-line generation of time-parametrized trajectory libraries for a set of flight conditions, and on training the corresponding interpolating functions for reduced-order information. We assume that the UAV has independent bounded control over the airspeed and altitude, and consider a 2D slice of the operating environment. Results show that the reduced-order model performs satisfactorily in wind conditions in excess of 50% of the UAV's airspeed (23 knots), when compared against simulation results using a medium-fidelity 6-DOF flight dynamics model. The motivation for determining the trajectory libraries is for use in motion planning algorithms that are based on forward simulation.


Wednesday 9:20 A.M. - 10:20 A.M.

Control

An Enhanced Dynamic Model for McKibben Pneumatic Muscle Actuators
Ruiyi Tang (University of Technology, Sydney) and Dikai Liu (University of Technology, Sydney)
An enhanced dynamic force model of a type of small and soft McKibben Pneumatic Muscle (PM) actuator is developed. This model takes external loads and a more sophisticated form of friction into account, and is presented as a polynomial function of pressure, contraction length, contraction velocity and external load. The coefficients in this model are determined from a series of experiments with constant loads and step pressure inputs. A comparative study against models that treat Coulomb friction as a constant force is conducted. The results demonstrate the clear improvement offered by the presented model.
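Fitting such a model's coefficients from experimental samples is ordinary linear least squares, sketched below on synthetic data. The linear-in-each-variable form, the units, and the "true" coefficients are all assumptions for illustration; the paper's polynomial is richer.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for the constant-load, step-pressure experiments:
# force samples as a function of pressure p, contraction x, velocity v
# and external load m (units assumed).
n = 300
p = rng.uniform(100, 500, n)   # kPa
x = rng.uniform(0, 0.03, n)    # m
v = rng.uniform(-0.1, 0.1, n)  # m/s
m = rng.uniform(0, 5, n)       # kg
F = 0.02 * p - 800 * x - 5 * v + 2 * m + rng.normal(0, 0.1, n)

# Fit the model coefficients by linear least squares.
A = np.column_stack([p, x, v, m, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, F, rcond=None)
print(coef)  # recovered coefficients, close to the generating values
```

Higher-order polynomial terms would simply add columns to `A`.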

Second-order Sliding Mode Control for Offshore Container Cranes
Raja Mohd Taufika Raja Ismail (University of Technology Sydney) and Quang Ha (University of Technology Sydney)
Open-sea loading/unloading of containers is an alternative way to avoid port congestion. This process involves a mobile harbour equipped with a crane which loads/unloads containers from a large cargo ship. However, ocean waves and wind in the open sea can produce excessive sway in the hoisting ropes of the crane system. This paper proposes a second-order sliding mode control for trajectory tracking and sway suppression of an offshore container crane. Under the proposed control law, the asymptotic stability of the closed-loop system is guaranteed in the Lyapunov sense. Simulation results indicate that the proposed controller can achieve high performance in trajectory tracking and swing angle suppression, despite the presence of parameter variation and system disturbances.

Locally Weighted Learning Model Predictive Control for Elastic Joint Robots
Christopher Lehnert (QUT) and Gordon Wyeth (QUT)
This paper proposes an efficient and online learning control system that uses the successful Model Predictive Control (MPC) method in a model-based locally weighted learning framework. The new approach, named Locally Weighted Learning Model Predictive Control (LWL-MPC), is proposed as a solution for learning to control complex and nonlinear Elastic Joint Robots (EJR). Elastic Joint Robots are generally difficult to learn to control because their elastic properties prevent standard model-learning techniques, such as learning computed torque control, from being used. This paper demonstrates the capability of LWL-MPC to perform online and incremental learning while controlling the joint positions of a real three Degree of Freedom (DoF) EJR. An experiment on a real EJR is presented, and LWL-MPC is shown to successfully learn to control the system to follow two different figure-eight trajectories.


Wednesday 11:10 A.M. - 12:30 P.M.

Vision - Object Recognition

Visual Sea-floor Mapping from Low Overlap Imagery using Bi-objective Bundle Adjustment and Constrained Motion
Michael Warren (Queensland University of Technology), Peter Corke (Queensland University of Technology), Oscar Pizarro (University of Sydney), Stefan Williams (University of Sydney) and Ben Upcroft (Queensland University of Technology)
In most visual mapping applications suited to Autonomous Underwater Vehicles (AUVs), stereo visual odometry (VO) is rarely utilised as a pose estimator because imagery is typically captured at a very low frame rate due to energy conservation and data storage requirements. This adversely affects the robustness of a vision-based pose estimator and its ability to generate a smooth trajectory. This paper presents a novel VO pipeline for low-overlap imagery from an AUV that utilises constrained motion and integrates magnetometer data in a bi-objective bundle adjustment stage to achieve low-drift pose estimates over large trajectories. We analyse the performance of a standard stereo VO algorithm and compare the results to the modified VO algorithm. Results are demonstrated in a virtual environment in addition to low-overlap imagery gathered from an AUV. The modified VO algorithm shows significantly improved pose accuracy and performance over trajectories of more than 300 m. In addition, dense 3D meshes generated from the visual odometry pipeline are presented as a qualitative output of the solution.

Automated species detection: An experimental approach to kelp detection from sea-floor AUV images
Michael S. Bewley (Australian Centre for Field Robotics), Bertrand Douillard (Australian Centre for Field Robotics), Navid Nourani-Vatani (Australian Centre for Field Robotics), Ariell Friedman (Australian Centre for Field Robotics), Oscar Pizarro (Australian Centre for Field Robotics) and Stefan B. Williams (Australian Centre for Field Robotics)
This paper presents an experimental study of automated species detection systems suitable for use with Autonomous Underwater Vehicle (AUV) data. The automated detection systems presented in this paper use supervised learning; a support vector machine and local image features are used to predict the presence or absence of Ecklonia radiata (kelp) in sea floor images. A comparison study was conducted using a variety of descriptors (such as local binary patterns and principal component analysis) and image scales. The performance was tested on a large data set of images from 14 AUV missions, with more than 60,000 expert labelled points. The best performing model was then analysed in greater detail, to estimate performance on generalising to unseen AUV missions, and to characterise errors that may impact the utility of the species detection system for marine scientists.
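The presence/absence prediction step can be illustrated with a minimal linear SVM trained by a Pegasos-style sub-gradient method, as a stand-in for the paper's classifier; random vectors stand in for the local image descriptors (e.g. local binary pattern histograms), and the data, dimensions and hyperparameters are all assumed.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for labelled image patches: descriptor vectors with a
# kelp-present (+1) / kelp-absent (-1) label.
n, d = 400, 16
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = np.sign(X @ w_true)

def train_linear_svm(X, y, lam=0.01, epochs=50):
    """Pegasos-style sub-gradient training of a linear SVM."""
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (X[i] @ w) < 1:          # margin violated
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:
                w = (1 - eta * lam) * w
    return w

w = train_linear_svm(X, y)
acc = np.mean(np.sign(X @ w) == y)
print(acc)  # training accuracy on the separable toy data
```

A practical system would use a held-out mission for evaluation, as the paper does when estimating generalisation to unseen AUV missions.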

Matching Objects across the Textured–Smooth Continuum
Ognjen Arandjelovic (Deakin University)
The problem of 3D object recognition is of immense practical importance, with the last decade witnessing a number of breakthroughs in the state of the art. Most previous work has focused on the matching of textured objects using local appearance descriptors extracted around salient image points. The recently proposed bag of boundaries method was the first to address directly the problem of matching smooth objects using boundary features. However, no previous work has attempted a holistic treatment of the problem by jointly using textural and shape features, which is what we describe herein. Due to the complementarity of the two modalities, we fuse the corresponding matching scores and learn their relative weighting in a data-specific manner by optimizing discriminative performance on synthetically distorted data. For the textural description of an object we adopt a representation in the form of a histogram of SIFT-based visual words. Similarly, the apparent shape of an object is represented by a histogram of discretized features capturing local shape. On a large database of a diverse set of objects, the proposed method is shown to significantly outperform both purely textural and purely shape based approaches for matching across viewpoint variation.
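The score-fusion step amounts to choosing a weighting that maximises discriminative performance on validation data, which a toy grid search makes concrete. The scores, noise levels, and thresholding scheme below are invented for illustration and are not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic matching scores for 200 candidate pairs (1 = same object).
labels = rng.integers(0, 2, 200)
tex = labels + rng.normal(0, 0.8, 200)   # textural score, noisier here
shp = labels + rng.normal(0, 0.5, 200)   # shape score

def acc(alpha):
    """Accuracy of the fused score at a fixed decision threshold."""
    fused = alpha * tex + (1 - alpha) * shp
    return np.mean((fused > 0.5) == labels)

# Grid search for the relative weighting on (stand-in) validation data.
alphas = np.linspace(0, 1, 101)
best = alphas[np.argmax([acc(a) for a in alphas])]
print(best, acc(best))
```

The paper instead tunes the weighting on synthetically distorted data, so that the fusion adapts to how each modality degrades.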

Towards an Efficient and Robust Optic Flow Algorithm for Robotic Applications
Juan David Adarve Bermudez (Australian National University), Li Wusen (Nanjing University of Science and Technology), Robert Mahony (Australian National University) and David Austin (MadJ innovations)
Optic flow has proven itself to be a powerful sensor modality for a wide range of robotic applications. The most popular optic flow algorithms, however, still tend to be based on classical algorithms developed in the era of analog video systems. In this work we reconsider the foundations of optic flow computation for robotic applications from a modern statistical and computer hardware point of view. We assume that high-speed digital camera systems and inertial image compensation are used to reduce the problem to computing sub-pixel flow, and consider algorithms with the potential to be implemented on FPGA computing hardware. In this paper we propose a novel flow algorithm, based on a combined least-squares and coupled generalized-total-least-squares optimization procedure, that yields high quality flow estimates for acceptable computational load.
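The classical least-squares building block that such algorithms refine is the Lucas-Kanade estimate: solve the brightness-constancy equations over a window for a single sub-pixel displacement. The sketch below shows that baseline on a synthetic pattern; it is not the paper's coupled generalized-total-least-squares procedure.

```python
import numpy as np

def lk_flow(I0, I1):
    """Least-squares (Lucas-Kanade style) flow for one window, assuming
    the whole window moves by one small sub-pixel displacement (u, v)."""
    Ix = np.gradient(I0, axis=1)   # spatial gradient in x
    Iy = np.gradient(I0, axis=0)   # spatial gradient in y
    It = I1 - I0                   # temporal difference
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()                # Ix*u + Iy*v = -It at every pixel
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic test: a smooth pattern shifted by a known sub-pixel amount.
y, x = np.mgrid[0:32, 0:32].astype(float)
true_u, true_v = 0.3, -0.2
I0 = np.sin(0.3 * x) + np.cos(0.4 * y)
I1 = np.sin(0.3 * (x - true_u)) + np.cos(0.4 * (y - true_v))

u, v = lk_flow(I0, I1)
print(u, v)  # close to (0.3, -0.2)
```

The inertial-compensation assumption in the abstract is what keeps the true displacement small enough for this linearised model to hold.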

