Paper Abstracts

Long Papers

Graham Jenson, Jens Dietrich and Hans Guesgen, "Toward Optimisation of Dependency Resolution for Component Based Systems"

Software components are encapsulated units of execution with declared information about their requirements and capabilities. Using this information, dependency resolution calculates component based systems where all requirements are met. Because multiple versions of the same component can be available, and different vendors can offer the same functionality fulfilling a requirement, resolution can have a potentially large number of possible solutions. When many possible solutions exist, identifying the optimal one is difficult.

In this study we discuss why optimisation for dependency resolution is desirable, and what is lacking in its current implementations. Then we empirically show that although resolution has a potentially large search space, making optimisation impractical, it can be restricted to limit the number of possible solutions by adding constraints based on desirable properties. Our main contributions in this study are a dependency resolution definition and an empirical investigation to show that optimisation of dependency resolution in component based systems is feasible.


Michael Walmsley, "Automatic Adaptation of Dynamic Second Language Reading Texts"

Research in second language (L2) acquisition suggests extensive reading (ER) as an effective L2 learning strategy. ER involves learners reading large quantities of easy L2 text. For most languages, reading material that is both easy and interesting is not available, due to the expense of creating it. This project is developing an approach for automatically adapting authentic text for computer-assisted ER. ER text will be tailored to the interests, goals and ability of individual learners.


Ali Akhtarzada, "A Flexible Multi-Dimensional Recommendation System"

Information overload and an abundance of choices lead to situations where selecting one option becomes difficult or a guessing game. The proposed framework aims to alleviate this problem by creating a decentralized consensus-based decision making system that can work over any grouping of entities, each defined by a set of value dimensions. The approach uses a new entity definition paradigm as the basis of a novel recommendation system. The datasets that would be required are not available because the paradigm is unique, so a sandbox was designed to meet the requirements based on user behaviour modelling. The paradigm allows a set of entities to be recommended based on the rankings and value dimensions of seemingly unrelated entities.


David Milne, "A link-based visual search engine for Wikipedia"

This paper introduces Hopara, a new search engine that aims to make Wikipedia easier to explore. It works on top of the encyclopedia's existing link structure, abstracting away from document content and allowing users to navigate the resource at a higher level. It utilizes semantic relatedness measures to emphasize articles and connections that are most likely to be of interest, visualization to expose the structure of how the available information is organized, and lightweight information extraction to explain itself.

Hopara is evaluated in a formal user study, where it is compared against a baseline system that provides the same functionality without visualization, and the incumbent Wikipedia search interface. Although objective performance measures are inconclusive, subjective preference for the new search engine is very high. All but one of twelve participants preferred the new system over Wikipedia, despite the latter's familiarity. The visualization component was a deciding factor: all but two participants chose it over the non-visual system.


Yun Jing and Hank Wolfe, "Chosen-Ciphertext Secure Non-Interactive Threshold HIB-KEM Without Random Oracles"

We present the first non-interactive threshold hierarchical identity-based key encapsulation mechanism (THIB-KEM) that is equipped with threshold private key generation and decapsulation. The THIB-KEM is selective-identity chosen-ciphertext (CCA2) secure based on the Decision Bilinear Diffie-Hellman (DBDH) assumption without the random oracle model.


Veronica Liesaputra, "Finding information in a book"

Information has no value unless it is accessible. With physical books, most people rely on the table of contents and subject index to find what they want. But what if they are reading a book in a digital library and have access to a full-text search tool?

The paper describes a search interface to Realistic Books, and investigates the influence of document format and search result presentation on information finding. I compare searching in Realistic Books with searching in HTML and PDF files, and with physical books.


Joseph Hobbs, Danver Braganza and Gillian Dobbie, "Record, Mix, Play, Share"

The ways in which we use the World Wide Web have been evolving rapidly. Recently, websites which facilitate the creation and distribution of user content have experienced huge success. Browser technologies have also been evolving, allowing powerful applications to be developed using the Web as a platform. The Web is now starting to enter domains traditionally restricted to the desktop, such as digital music creation. This paper discusses the design and implementation of Mixa: a web application which supports the creation of music entirely from the browser. Users are able to record audio clips, compose them into songs, listen to a constantly growing body of music and share their experiences with their friends. This paper covers many contemporary web technologies which are used to implement a working prototype of Mixa. We hope the work discussed will form the basis of an ongoing research project.


Yun Jing and Hank Wolfe, "Improved Efficiency for Chosen-Ciphertext Secure HIB-KEM Built Using Tags"

We present a very efficient hierarchical identity-based key encapsulation mechanism (HIB-KEM) built on an existing KEM (the BMW-KEM). The resulting HIB-KEM, like the BMW-KEM, is selective-identity chosen-ciphertext (CCA2) secure based on the Decision Bilinear Diffie-Hellman (DBDH) assumption without the random oracle model, but with improved efficiency. The efficiency is achieved by incorporating into the BMW-KEM a novel technique developed by Abe et al., which enables us to replace almost all of the quite expensive pairing-based verifications of the validity of a key encapsulation with a single message authentication code (MAC)-based verification. A MAC can be quickly constructed with the help of fast hash functions, such as MD5 or SHA-1, and the resulting overhead is trivial.


Qiao Ma, "The Effectiveness of Requirements Prioritization Techniques for a Medium to Large Number of Requirements: A Systematic Literature Review"

In software system development, requirements prioritization helps people to discover the most desirable requirements. Previous research indicates that many requirements prioritization techniques have constraints on medium to large numbers of requirements. This research uses a Systematic Literature Review to investigate the strength of evidence for the effectiveness of different requirements prioritization techniques for medium to large numbers of requirements. A Systematic Literature Review investigates research questions through identifying, evaluating and interpreting all relevant studies. After conducting the Systematic Literature Review, it is found that the strength of evidence for effectiveness is weak for most prioritization techniques for large numbers of requirements. More studies on prioritization techniques for large numbers of requirements are needed. The stronger evidence presented for prioritization techniques for medium-sized numbers of requirements shows that these techniques are more mature. However, all the studies in the medium-size category use a subjective measure of improvement based on the users' perceptions of the level of improvement, so the evaluations in these studies are still not strong.


Alastair Abbott, "De-quantisation in Quantum Computing: An Overview and an Application to the Quantum Fourier Transform"

The quantum Fourier transform (QFT) plays an important role in many quantum algorithms such as Shor's algorithm for prime factorisation. In this paper we investigate the ability to de-quantise the QFT into an equivalently efficient classical algorithm. By working directly with the quantum algorithm and qubits instead of the corresponding circuit, we de-quantise the QFT algorithm acting on a basis state. We further explore the ability to extend the de-quantisation to arbitrary product states, and present some initial findings towards this goal. Our technique sheds light on some common misconceptions about the nature of the QFT, and highlights the linearity of quantum mechanics as the key feature which allows the QFT to be computed efficiently in the general case, thus making it such a useful tool.
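As background to the de-quantisation question, the amplitudes of the QFT applied to a single n-qubit basis state |j> have a simple closed form, (1/sqrt(2^n)) exp(2*pi*i*j*k/2^n) for k = 0, ..., 2^n - 1, and can be computed classically; a minimal sketch of this standard fact (not the paper's own technique):

```python
import numpy as np

def qft_basis_state(j, n):
    """Amplitudes of QFT|j> for an n-qubit basis state |j>.

    Returns the length-2^n vector (1/sqrt(N)) * exp(2*pi*i*j*k/N),
    N = 2^n. On a basis state the result is a product state, which
    is what keeps a classical simulation of this case efficient.
    """
    N = 2 ** n
    k = np.arange(N)
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)
```

For example, QFT|0> is the uniform superposition, so `qft_basis_state(0, 3)` returns eight equal amplitudes of 1/sqrt(8).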


Mashitoh Hashim and Tadao Takaoka, "A New Algorithm for Solving the All-pairs Shortest-path Problem in O(n^2 log n) Expected Time"

A new algorithm is described that is simpler than previous algorithms for solving the all-pairs shortest path (APSP) problem for a weighted digraph with edge weights drawn from an endpoint independent probability distribution. The expected running time is O(n^2 log n), where n is the number of vertices in the graph. This algorithm modifies Spira's method and at the same time uses Takaoka-Moffat's and Bloniarz's ideas of improving the probability of a candidate vertex being successfully included in the solution set through scanning effort. The design of this algorithm was only possible after we developed a unified approach to the average-case analysis of existing APSP algorithms.
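For context, the textbook deterministic baseline for APSP on non-negative weights simply runs Dijkstra's algorithm from every source, roughly O(m log n) per source with a binary heap; a minimal sketch of that baseline (not the paper's expected-time algorithm):

```python
import heapq

def apsp_dijkstra(graph):
    """All-pairs shortest path distances by running Dijkstra from
    each vertex. graph: {u: {v: weight}} with non-negative weights.
    Returns {u: {v: distance}} for reachable v. This is the standard
    baseline; the paper instead refines Spira's approach to obtain
    O(n^2 log n) expected time.
    """
    def dijkstra(src):
        dist = {src: 0}
        pq = [(0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float('inf')):
                continue  # stale queue entry
            for v, w in graph.get(u, {}).items():
                nd = d + w
                if nd < dist.get(v, float('inf')):
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
        return dist
    return {u: dijkstra(u) for u in graph}
```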


Craig Anslow, James Noble, Stuart Marshall and Ewan Tempero, "Visualizing the Size of the Java Standard API"

The design of software should be made up of small packages and classes. The Java Standard API has grown very large since Java's beginnings, and now contains over 200 packages, nearly 5800 classes, and nearly 50,000 methods. We have conducted visual software analysis on the Java Standard API using existing software visualization techniques to identify large packages and classes in the API. Our analysis has identified a number of large packages and classes in the Java Standard API, which suggests possible refactoring opportunities.


Mohammed Thaher, Sandhya Samarasinghe and Don Kulasiri, "Neural Network Modelling of DNA damage detection in cells"

Neural Networks (NNs) can be used to explore how signals are sent from molecules to interact with other signalling systems in the cell. Investigating these signalling systems would increase the possibility of understanding protein communication inside a cell. Two crucial proteins are p53 and mdm2. p53 is vital in the process of repairing DNA damage, whereas mdm2 acts as an inhibitor of p53 action in normal cell operation. The importance of these proteins is a consequence of their role in minimizing the chances of cancer occurrence. In past research, this has been studied only in an ad hoc fashion in relation to the two proteins. Moreover, the currently existing models of the two proteins lack real-time biological knowledge, which makes them inefficient to apply in real-life applications. This paper presents a prototype with an added level of complexity in order to produce a new outline for a theoretical model using NNs. This model also takes into account the complicated genetic processes and the different systems in human biology and their relationship with DNA damage and cell repair. The rationale behind using NNs is their three strengths: forecasting, modelling, and characterization of systems.


Craig Taube-Schock, "Codenet: a tool for analyzing the structure of software systems"

Development and maintenance of software systems requires effort, and increased effort translates to increased cost. To minimize cost, it is desirable to develop software in a manner that minimizes development and maintenance effort. Unfortunately, there are no conclusive methods for developing software in this way. There is general consensus that the structure of software can affect its modifiability, but this relationship is not well understood. This paper proposes a research tool called codenet whose purpose is to facilitate exploratory research into structural analysis of software systems. A general theoretical basis for the design of codenet is provided, followed by a detailed description of its operation. The results of this work will be used as a basis for further research into understanding the relationship between software system structure and modifiability.


David Thompson and Tim Bell, "Evaluating CS education activities in Virtual Worlds through automated monitoring"

It is possible to expose students to a range of CS concepts without requiring them to be able to program first. One approach is to provide activities in Second Life and similar virtual worlds where the students can experience part of an algorithm or other concept by navigating in the virtual world. Not only does this offer a rich range of possibilities for presenting CS concepts, but it also allows us to evaluate the effectiveness more precisely than in the physical world because of the opportunities to monitor students' actions precisely. This paper surveys a number of methods for automated data capture from virtual environments such as Second Life and OpenSim for the purpose of evaluating such experiences. We look at ways to extract data from various points in the user–simulation system, and review them with respect to a range of considerations including ease of analysis, effect on performance, difficulty of implementation, and ethical/privacy issues for researchers and educators.


Andrew Meads, Thiranjith Weerasinghe and Ian Warren, "Odin: A Mobile Service Provisioning Middleware"

With the ever-increasing capabilities in today's mobile devices, it is possible for them to move from their traditional role as service consumers to one in which they act as service providers. The act of making these mobile services available to clients is known as mobile service provisioning. In this paper, we highlight key challenges in mobile service provisioning and mobile application development. Furthermore, we review existing approaches in these areas, and give an overview of Odin, our mobile service provisioning middleware. Finally, we propose a model-driven toolkit for mobile application and service development that will generate Odin-based applications in a platform-independent manner. Such a tool differs from current work in that it will be designed to support the mobile application and service development lifecycle in its entirety, rather than focusing on one specific area. We believe that such a toolkit is necessary to promote the rapid adoption of mobile services.


Syed Muhammad Ali Shah, Jens Dietrich and Catherine McCartin, "Formalization of Architectural Refactorings"

Refactoring is the process of improving the internal quality of software without affecting its external behavior. Code-level refactorings are widely discussed in the literature, but there is another class of refactoring that targets the software architecture. Architectural refactorings are not widely supported by tools; they are performed manually, which is expensive as well as time consuming. Existing specifications of architectural refactorings are vague and abstract, and cannot be used to automate the architectural refactoring process. A clear economic benefit could be achieved if architectural refactorings were (semi-)automated. With this need in view, this project aims to formalize a set of architectural refactorings so that they may be (semi-)automated with tool support. For this purpose we propose to develop a declarative language for refactoring. This language will be used to formalize architectural refactorings. The formalization will be achieved by splitting an architectural refactoring into smaller primitive refactorings, leading to a composition of refactorings. We propose to use fitness functions to evaluate the impact of the refactoring process, because the ultimate goal of refactoring is to improve software quality while preserving external behavior; these fitness functions need to improve after the refactoring process. A pragmatic approach based on testing and type checking will be used to ensure that program invariants are not violated and behavior is preserved during the transformation process.


James Bebbington and Peter Andreae, "A Planner for Qualitative Models"

The paper describes the design and implementation of a planning agent that uses a particular kind of qualitative model to construct good contingent plans that will allow the agent to achieve specified goals. The qualitative models used by the planner describe the behavior of bounded physical systems and are generated by a learning agent that learns from observing and experimenting with the systems.


Sumant Murugesh, "Delivering Computer Science Concepts at Secondary School Level"

The Ministry of Education in New Zealand has recently made a major revision to digital technologies in the curriculum and has established Digital Technologies as a standalone section in the technology learning area. This change has led to the addition of "Programming and Computer Science" as one of the main strands of Digital Technologies. The changes pose several implementation challenges when it comes to training teachers in the subject area and directing students to choose the right career path (Bell, Andreae, & Lambert, 2010). Teachers will need training in the newly added areas and access to sufficient resources to teach them. This paper explores some of the many ways of teaching Computer Science and Programming concepts at this level. This research proposes to compare certain teaching and learning resources and identify their effectiveness, from both the student and teacher perspectives. These resources are helpful to teachers not only for developing their own knowledge and confidence in the subject area but also for gathering ideas for effective teaching in these new areas.


Stephen Lean, Hans Guesgen, Inga Hunter and Kudakwashe Dube, "Computational Confidence for Decision Making in Health"

The New Zealand Health and Disability Sector must handle large volumes of information and complex information flows, e.g. 1,300,000 referrals [11] per annum from General Practitioners alone. Access to and effective use of this information is needed for efficiency gains in this sector, as a lack of appropriate information is costly, both in financial terms and in adverse outcomes for patients. However, this information must be used safely, especially in clinical decision making. Effective and safe use of information has become a driver for this sector. For safe use, clinicians must have confidence in the veracity of the information they wish to use. This research aims to investigate what factors lead to confidence in health information; how these factors can be mapped and conceptualised into a model for confidence in health information; and how a prototype system can be produced that implements this model and provides both a computational measure for confidence in health information and a representation of that measure. This paper describes work in progress.


Martin van Zijl and Mohammad Obaid, "Towards Expressive Virtual Characters in AR Environments"

Virtual characters are computerised representations of humans used in virtual reality or augmented reality environments. The importance of expression and emotion generation in virtual characters has recently been recognised by the research community, whose focus is to understand users' behaviour and interactivity with virtual characters in different application domains. In most cases, virtual characters appear to users on desktop interfaces or in immersive virtual environments. However, little work has been done on developing and understanding the user's interaction with expressive virtual characters in Augmented Reality (AR) environments. In this paper, we propose an approach to integrating an expressive virtual character (Greta) into an AR environment, where the aim is to provide a more immersive user experience than traditional virtual character interfaces. No implementation has been undertaken yet, but a literature review shows the potential use of virtual characters in AR environments.


Diane Strode, "Coordination in Agile Software Development Projects: An Empirical Research Design"

Effective coordination is an issue of perennial interest in software development, because poor coordination is known to contribute to problems within software projects. Although agile software development has been investigated from numerous perspectives, how this unique class of development contributes to project coordination has not been explored in depth. Therefore a study investigating how an agile software development approach contributes to effective and flexible project coordination motivates this research-in-progress paper. Literature on agile methods and an interdisciplinary theory of coordination are reviewed, and a conceptual model is presented. A research design to explore coordination within agile development projects is described. This involves an investigation of the impact of the coordination strategy of agile projects on project coordination.


Harya Widiputra, "Building an Integrated Multi-model Framework for Multiple Time-series Prediction"

The topic of time-series prediction has been prominent among other studies, yet most work in this field has focused on predicting movements of a single series only, whilst prediction of multiple time-series based on patterns of interaction between the series has received very little attention.

On the other hand, findings in different studies show that given multiple time-series there might exist patterns of interaction between them, and being able to extract, learn and model these patterns of interaction leads to the possibility of building models that can be used to predict movement of the series.

The aim of the study is to propose different models of learning, namely (1) inductive, (2) local and (3) transductive, for multiple time-series prediction. As a final result, an integrated multi-model framework for multiple time-series prediction, integrating the knowledge extracted by the different models, is expected to be developed. Additionally, an effort to reveal patterns of interaction between series will also be carried out, trying to understand how they change across time (as new knowledge).


Ayesha Hakim, Stephen Marsland and Hans Guesgen, "A Reliable Hybrid Technique For Human Face Detection"

The progress of computer vision technology has opened new doors for interactive and friendly computer interfaces. Human face detection is an essential step of various human-related computer applications, including face recognition, emotion recognition, lip reading, and several intelligent human computer interfaces. Since it is the basic step in such applications, it must be reliable enough to support further steps. Several approaches to detecting human faces have been proposed so far, but none of them can detect faces under all conditions, such as varied lighting; frontal, profile, tilted and rotated faces; occlusion by glasses, hijab or facial hair; and noise. We propose a more reliable hybrid approach that is able to detect human faces in multiple circumstances. Moreover, a brief but comprehensive review of the literature is presented that may be useful for evaluating any face detection system. Our proposed approach gives up to 97% accuracy on 600 images (both simple and complicated), which is, to our knowledge, the highest accuracy rate reported to date.


Waseem Ahmad and Ajit Narayanan, "Clustering Inspired by Immune System Humoral Mediated Response"

In recent years researchers have turned to nature for solutions to complex problems, e.g. classification, clustering and optimization. This paper describes a novel clustering algorithm inspired by the humoral mediated response triggered by the adaptive immune system. A novel methodology for merging similar clusters and removing less significant ones, also inspired by the natural immune system, is discussed. The novelty of the proposed algorithm is discussed in the context of existing immune-system-based clustering algorithms. The performance of the clustering algorithm is tested on both synthetic and real-world datasets.


Yuliya Bozhko, "Towards an Institutional Lifelong Learning Environment"

Lifelong learning is the self-directed pursuit of knowledge or skills that occurs throughout one's life. The importance of lifelong learning skills has been increasingly emphasised in the workplace and public policy over the last decade. Higher education institutions recognise the importance of lifelong learning and include learning attributes in their graduate profiles.

Systems currently used in universities provide only limited features to support lifelong learning. In this paper we suggest a learner-centered e-learning environment which will provide comprehensive support for lifelong learning. This environment will be built on an institutionally focused Learning Management System and a learner-focused ePortfolio system, both already used in universities. These two systems are already connected, but to adequately support lifelong learning, extensions are required: students need to be in charge of their own learning progress; they need to be able to choose the environment that serves their needs best and has a smart data workflow to connect easily to their institution's environment; and the approach should be streamlined for both teachers and students.


Jonathan Rubin and Ian Watson, "Similarity-Based Retrieval & Reuse of Betting Decisions in the Game of Texas Hold’em"

In this paper we introduce our autonomous poker playing agent SARTRE and describe the memory-based approach it uses to create a betting strategy for two-player, limit Texas Hold'em. Hand histories from strong poker players are observed and encapsulated as cases which capture specific game state information. Betting decisions are generalised by retrieving and re-using solutions from previous similar situations. The similarity metrics used to retrieve similar cases are described. The performance of the system is then evaluated by conducting experiments against Sparbot and Vexbot, two strong computerised agents developed by the University of Alberta Computer Poker Research Group. The results of the 2009 IJCAI Computer Poker Competition are also presented, in which a version of the SARTRE system participated.


Mohammed Thaher and Tadao Takaoka, "Efficient Algorithms for the Maximum Convex Sum Problem"

This paper presents significant contributions to optimizing the maximum gain or sum in the maximum subarray problem. The first is an efficient algorithm that determines the boundaries of the convex shape while having the same time complexity as existing algorithms. Despite the presence of complicated operations, this process returns an optimised solution. The second generalizes this algorithm to find the first maximum sum, the second maximum sum, and up to the k-th maximum sum.
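For background, the one-dimensional ancestor of these problems, the classical maximum subarray sum, is solvable in linear time by Kadane's algorithm; a minimal sketch of that standard building block (for orientation only, not the paper's convex-sum algorithm):

```python
def max_subarray(a):
    """Kadane's algorithm: maximum sum over all contiguous
    subarrays of a non-empty list, in O(n) time. The 2-D maximum
    subarray and convex sum problems generalise this idea to
    rectangular and convex regions.
    """
    best = cur = a[0]
    for x in a[1:]:
        cur = max(x, cur + x)   # extend the run or start afresh
        best = max(best, cur)
    return best
```

For example, `max_subarray([-2, 1, -3, 4, -1, 2, 1, -5, 4])` returns 6, from the subarray [4, -1, 2, 1].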


Anna Huang, "Combining Global Semantic Relatedness and Local Analysis for Document Clustering"

Utilizing semantic relations between concepts can help the document clustering task by connecting semantically similar topics that are expressed with different terminology in different documents. Recently an increasing amount of research has been conducted on leveraging external knowledge resources that embed such semantic relations to help find thematically more coherent document groups. One of the main approaches for utilizing semantic information in knowledge resources is to derive a semantic relatedness measure between concepts and incorporate it into the similarity calculation between a document and another document or a group of documents in subsequent clustering. However, given a specific document collection, not all the semantic relations available from the knowledge resources are desired, because the given document set might have a different topic domain than the knowledge resource, which is normally very general. Some relations are strengthened and some are weakened in the local documents' semantic setting. Therefore, it is necessary to adjust the global semantics to fit the local context. In this paper we investigate this inconsistency problem between global and local semantics. We propose to combine the two aspects by using correlation-based analysis to find semantic relations that are both significant in the external resource and strong in the local setting. We investigated the effectiveness of enriching document clustering with these strong relations, with two types of resources, Wikipedia and the Medical Subject Headings (MeSH). Empirical results on both resources showed encouraging improvements over two baseline methods.


Paul Hunkin and Tony McGregor, "Wireless Sensor Networks: A Distributed Operating Systems approach"

Wireless sensor networks, and methods of developing user applications for them, form an active research area with several unique challenges. In this paper we first present an overview of the current programming approaches. We then describe a system under development that adapts techniques from the distributed operating systems world to create a new method of programming wireless sensor networks.


Simon Ware, "Compositional Verification of Safety Properties Using Language Projection and Certainly Safe States"

Model checking is the task of searching the state spaces of finite-state automata to see whether they satisfy certain properties of interest. In many practical applications, the state space is much larger than can possibly fit in the memory of a computer. One of the methods developed to overcome this problem and make it possible to verify large models is the so-called modular method. This paper proposes to extend this method with a simpler technique for simplifying projected automata. In addition, it suggests a method for removing redundant parts of an automaton, called certainly safe states.


Aram Ter-Sarkissov, Steve Marsland and Barbara Holland, "The k-bit-swap: A New Genetic Algorithm Operator"

The three main operators in a Genetic Algorithm (GA) are selection, crossover and mutation, of which crossover and selection are not present in their pure form in any other heuristic or artificial intelligence tool. This article introduces a new operator, the k-bit-swap, which improves GA performance on many problems. A large number of computational experiments were performed to determine the optimal set of parameters, and linear regression is used to study the effect of various GA operators and parameters on the outcome.
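The operator itself is defined in the paper; purely as an illustration, one plausible reading, exchanging the values at k randomly chosen positions between two bit-string chromosomes, can be sketched as follows (the function name and exact semantics here are assumptions, not the authors' definition):

```python
import random

def k_bit_swap(parent1, parent2, k, rng=random):
    """Illustrative sketch of a k-bit-swap-style operator: swap the
    bits at k randomly chosen positions between two bit-string
    chromosomes (lists of 0/1). This is a hypothetical reading;
    see the paper for the operator's actual definition.
    """
    c1, c2 = parent1[:], parent2[:]          # don't mutate parents
    positions = rng.sample(range(len(c1)), k)
    for i in positions:
        c1[i], c2[i] = c2[i], c1[i]
    return c1, c2
```

Unlike one-point crossover, an operator of this shape exchanges material at scattered positions rather than a contiguous segment.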


Brian Thorne and Raphael Grasset, "Python for Prototyping Computer Vision Applications"

Python is a popular language widely adopted by the scientific community due to its clear syntax and an extensive number of specialized packages. For image processing or computer vision development, two libraries are prominently used: NumPy/SciPy and OpenCV with a Python wrapper. In this paper, we present a comparative evaluation of both libraries, assessing their performance and their usability. We also investigate the performance of OpenCV when accessed through a Python wrapper versus directly using the native C implementation.
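To give a flavour of the kind of task such comparisons involve, here is a hypothetical RGB-to-grayscale conversion written in pure NumPy (the function and test image are illustrative; OpenCV covers the same operation with cv2.cvtColor and the COLOR_RGB2GRAY code):

```python
import numpy as np

def to_gray(img):
    """Convert an H x W x 3 uint8 RGB image to an H x W float
    grayscale image using the standard BT.601 luminance weights.
    Illustrative prototyping example, not code from the paper.
    """
    weights = np.array([0.299, 0.587, 0.114])
    return img @ weights  # broadcasts the dot product per pixel

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[..., 0] = 255          # a pure-red test image
gray = to_gray(img)        # each pixel becomes 255 * 0.299
```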

Short Papers

Samuel Sarjant, "Cross-Entropy Relational Reinforcement Learning"

This paper presents a policy-based algorithm for a learning agent that uses relational reinforcement learning to solve problems. Information is passed to the agent in the form of first-order observations of an environment, and the agent attempts to take actions that maximise the numerical reward received from those actions. Because the agent may be presented with an environment of arbitrary size, recording an explicit state-action expected-reward table is inefficient, so a policy-driven approach is used. Furthermore, complex environments can prove insurmountable for a single agent, so a modular solution is proposed.


Tania Roblot, "An Algorithm Computing the Finite-State Complexity"

In this paper, we present the first algorithm for computing the finite-state complexity. We first briefly introduce finite transducers and the finite-state complexity derived from them. The finite-state complexity is a computable analogue of the prefix complexity in Algorithmic Information Theory. Here, we focus on how to compute this complexity and present both the algorithm at hand and some computed results.


Thomas Young, "Applying "EvoDevo" to Evolutionary Algorithms"

This short paper introduces the author's PhD research program in the application of Evolutionary Development ("EvoDevo") to Evolutionary Algorithms, particularly in the effect of development upon a fitness landscape.


Mahsa Mohaghegh, "A Statistical Approach to English-Persian Machine Translation"

Statistical Machine Translation has successfully been used for translation between many language pairs, contributing to its popularity in recent years. It has, however, not been used for the English/Persian language pair. This paper presents the first such attempt and describes the problems faced in creating a corpus and building a baseline system. We discuss our experience with the construction of a parallel corpus during this ongoing study and the problems encountered, especially in the alignment process. The prototype and its evaluation using the BiLingual Evaluation Understudy (BLEU) metric are briefly described and the results are analyzed. In the final part of the paper, conclusions are drawn and future work is discussed.
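For readers unfamiliar with BLEU, the core idea can be sketched in a few lines (a deliberately simplified, unigram-only, single-reference version; real BLEU combines modified n-gram precisions up to 4-grams):

```python
import math
from collections import Counter

# Simplified sketch of the BLEU idea: clipped word-overlap precision
# scaled by a brevity penalty for candidates shorter than the reference.

def bleu1(candidate, reference):
    cand, ref = candidate.split(), reference.split()
    overlap = Counter(cand) & Counter(ref)        # clipped counts
    precision = sum(overlap.values()) / len(cand)
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

score = bleu1("the cat sat on the mat", "the cat is on the mat")
# 5 of the 6 candidate words match the reference; equal lengths -> BP = 1
assert abs(score - 5 / 6) < 1e-9
```

The clipping step (the Counter intersection) is what stops a candidate from gaming the score by repeating a common reference word.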


Dominic Winkler and Andy Cockburn, "Bag and Dump: copy-and-paste across contexts"

The copy-and-paste paradigm is a fundamental operation in graphical user interfaces. However, existing copy-and-paste techniques are limited when copying across contexts such as different folders. In this paper we introduce a concept for a new copy-and-paste technique that allows the user to copy-and-paste multiple objects across different contexts. This technique not only significantly reduces mouse movement, from a path of 2n-1 nodes to n nodes, it also gives constant visual feedback. The concept introduced in this paper will be extended, implemented and evaluated in future work.


Jevon Wright, "The Development of a Modelling Language for Rich Internet Applications"

The relatively recent innovation of Rich Internet Applications (RIAs) has introduced important usability and reliability improvements to server-side web applications; however, no existing modelling language for web applications can model the new concepts involved. Our proposed Internet Application Modelling Language aims to provide a simple domain-specific language for RIAs. In this paper, we discuss the ongoing development of both a meta-model for this language and its accompanying CASE tool, which aims to provide a rich modelling environment for the design, development and deployment of RIAs.


Kshitij Dhoble, "Multi-example Image Retrieval on Active mode Incremental NDA Learning"

This paper presents a novel application of multi-example image retrieval based on active mode incremental Nonparametric Discriminant Analysis learning (MeIR). Traditional methods conduct a query using only one image as the template for similarity comparison during retrieval. In this work, the template is instead replaced by the discriminative differences amongst multiple example images. These discriminative differences are extracted by Nonparametric Discriminant Analysis (NDA) from the given set of example images and are used together with correlation-based similarity metrics. The retrieved image samples are learned incrementally and iteratively in order to obtain the next set of correlated images from the image dataset. MeIR retrieves images by exploiting the discriminative features present in the multi-example query, represented by the discriminant template. The performance of the proposed method is evaluated on face and object category image datasets, with a comparison to traditional single-example image retrieval. The results of the empirical investigation show that the proposed multi-example image retrieval (MeIR) can be used for efficient recognition and retrieval tasks.


Stefan Schliebs, "Heterogeneous Probabilistic Models for Optimization and Modelling of Evolving Spiking Neural Networks"

This paper summarizes recent developments on the quantum-inspired evolving spiking neural network (QiSNN). QiSNN is an integrated connectionist system in which the features and parameters of an evolving spiking neural network are optimized together using a quantum-inspired evolutionary algorithm. The feature selection and classification performance of QiSNN was experimentally investigated in numerous recent studies involving both synthetic benchmark problems and real-world data sets.


Hamwira Sakti Yaacob, Brendan McCane and Michael Albert, "Ensemble Classifiers from Functionally Complete Algebra"

Classification is a task that categorises an unlabeled instance based on information learned from existing instances. In this experiment, we explore the potential of terms derived from a functionally complete algebra to serve as weak classifiers. An AdaBoost learning technique was adapted to produce a strong ensemble classifier. The performances of single weak classifiers and of the ensemble classifier are also reported.
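The generic AdaBoost loop that such weak classifiers would plug into can be sketched as follows (a minimal illustration using simple threshold "stumps" as the weak classifiers; the paper's algebra-derived classifiers are not reproduced here):

```python
import math

# Hedged sketch of discrete AdaBoost with threshold stumps as the
# weak-classifier pool. Labels are +1/-1.

def stump(threshold, sign):
    return lambda x: sign if x > threshold else -sign

def adaboost(xs, ys, pool, rounds=3):
    n = len(xs)
    w = [1.0 / n] * n                     # instance weights
    ensemble = []                         # list of (alpha, classifier)
    for _ in range(rounds):
        # pick the weak classifier with the lowest weighted error
        h, err = min(
            ((h, sum(wi for wi, x, y in zip(w, xs, ys) if h(x) != y))
             for h in pool),
            key=lambda pair: pair[1])
        if err >= 0.5:                    # no better than chance: stop
            break
        alpha = 0.5 * math.log((1 - err) / max(err, 1e-12))
        ensemble.append((alpha, h))
        # re-weight: increase the weight of misclassified instances
        w = [wi * math.exp(-alpha * y * h(x)) for wi, x, y in zip(w, xs, ys)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    return 1 if sum(a * h(x) for a, h in ensemble) > 0 else -1

xs = [1, 2, 3, 4, 5, 6]
ys = [-1, -1, -1, 1, 1, 1]
pool = [stump(t + 0.5, 1) for t in range(6)]
model = adaboost(xs, ys, pool)
assert all(predict(model, x) == y for x, y in zip(xs, ys))
```

The adaptation described in the abstract would amount to replacing the stump pool with terms drawn from the functionally complete algebra.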


Chiang Tay, Mohammad Obaid and Ramakrishnan Mukundan, "Enhancing Synthesised Facial Images"

This paper presents several ideas to enhance the previous research on generating expressive facial images from the quadratic representations of facial expressions. The proposed ideas are focused on improving the quality and performance of the generated facial images. In particular, we focus on enhancing the synthesis of the generated artificial facial features. A full user evaluation is also considered to validate the proposed ideas.


Akram Abdulkarim Sabbah Darwish, "Word Form Normalization for Text Mining in Highly Inflectional Languages"

In this paper we argue that a comprehensive study to determine the optimal word form normalization approach for text mining in highly inflectional languages is important. We illustrate the role of word form normalization in text mining for highly inflectional languages through empirical studies of several such languages, including Swedish, Finnish and Arabic. These studies show the significant impact of word form normalization methods such as stemming and lemmatization on text mining techniques, where they are used to reduce dimensionality, improve accuracy and save processing time in tasks such as text categorization and clustering.


Elin Eliana Abdul Rahim, Keith Unsworth, Alan McKinnon , Andreas Duenser, Mark Billinghurst, Peter Gostomski and Ken Morison, "Navigation Issues in the Development of a Virtual Chemical Processing Plant"

We are proposing a desktop Virtual Reality (VR) application for Chemical and Process Engineering education that allows students to explore a virtual processing plant on a standard computer. However, there are certain navigation issues in the proposed application due to the complexity of the physical structure of the processing plant and the distance between the nodes within the virtual environment (VE). The first phase of this research concentrates on addressing these navigation issues for the proposed application. This paper discusses design issues related to navigation in a desktop VR application for a complex processing plant and considers what further research is required.


Bernard Otinpong, Alan Mckinnon and Stuart Charters, "Visualization and Public Participation"

Changes to the natural landscape are often met with fierce resistance from various stakeholders. This is often because stakeholders fail to get an overall picture of how changes to their landscape will affect them. The principle of community engagement holds that those who are affected by a decision have a right to be involved in the decision-making process. Visualization could be used as a tool to facilitate community engagement in decision making. To ensure the effectiveness of participatory approaches, they could be measured against the standards of the International Association for Public Participation's Spectrum of Public Participation (IAP2). This paper evaluates the effectiveness of visualization in public participatory approaches to decision making, using the standards of the IAP2, in an urban and a rural scenario, and finds that there are no guidelines on the effectiveness of visualization in public participatory approaches.


Siva Dorairaj, James Noble and Petra Malik, "Exploring Distributed Agile Projects: A Grounded Theory Perspective"

The success of Agile projects encourages practitioners to incorporate Agile methods in distributed projects. Most of the Agile methods, however, were developed to work successfully for collocated teams. This paper outlines the proposed research on distributed Agile projects. We explore distributed Agile projects using a grounded theory methodology. We aim to understand the challenges faced by Agile practitioners in distributed projects, identify the key success factors in distributed Agile projects, and collate the strategies adopted by Agile practitioners to manage distributed Agile projects.


Doug Hunt, "Real World Context Data Collection - How many errors can I make?"

Field studies that set out to collect real world data for context recognition can be more difficult to organise, manage and undertake compared with laboratory data collection. However, conclusions drawn from the analysis of real world context data collected in the field can be more robust than conclusions drawn from context data collected in artificial environments.

This paper describes the issues and challenges encountered during field study context data collection carried out in Sweden with horse riders during actual riding sessions, sometimes literally in a field. The data was collected for Context Recognition analysis designed to find patterns that could be consistently associated with a horse rider mounting a horse.


Richard Vidal, "Sharing Clipboards using RFID"

Sharing data with other computer users is often neither easy nor quick, even if they are physically close. After studying traditional methods of sharing data, we created a Phidget device using RFID technology and programmed a C# application to help users share data copied to their clipboards. We evaluated the project through usability testing and an analytical evaluation, and concluded that our solution could provide a new, efficient mechanism for transferring data between users.


Le-Thu Nguyen, Richard Harris and Jusak Jusak, "Modelling of Quality of Experience for Web Traffic"

Surfing the Web is now a routine task, used in activities that range from bank transactions to entertainment. Web technology is advancing in order to provide its users with quality and advanced features. Based on networking parameters, Internet Service Providers (ISPs) want to ensure better Quality of Service for their customers. However, a question arises concerning the assessment of this Quality of Service: since it is typically made from the ISP's perspective, it may not be sufficient to fairly assess the experience of the users who directly consume the service. It may be meaningless for an ISP to guarantee that the service is good if its users are not satisfied with it. Moreover, a user's opinion, being a subjective assessment, may not be a totally correct assessment either. In this paper we propose a new approach to the measurement and analysis of Quality of Experience (QoE) in which we objectively observe users' behaviour as they interact with the Web and infer their Quality of Experience from it. Our new model thus enables people to give feedback through the way in which they interact with the system, rather than through a subjective survey of their opinions. This leads to the building of a measurement model that can be shown to exhibit a good correlation between objectively observed parameters and subjectively based parameters.


Johan Scholtz, "Digital Data Forensics - a critical study of the Investigative Examination Process - towards an Automated Digital Forensic Model"

In this paper, we describe how investigator experience and existing frameworks influence the investigation process. We also cover the feasibility of an automated investigative process based on creating global corpora of previous forensic cases.


Sagaya Amalathas, Tanja Mitrovic and Ravan Saravanan, "Intelligent Tutoring System for Palm Oil Industry"

This paper discusses the design and development of an Intelligent Tutoring System (ITS) for the palm oil industry. The ITS would provide the benefits of one-on-one workplace training to palm oil plantation managers and employees without requiring a personal human trainer. People can be trained from any location, at any time, without additional training costs to the organizations. The ITS models the trainee's knowledge level, skills and actions by assessing the user's actions, and adapts the delivery of knowledge to the user based on the derived model. Initially, the ITS will train users in effective decision making on yield improvement and soil nutrient management, two global concerns within the palm oil industry. The developed ITS will be integrated with a Management Information System (MIS) developed for the palm oil industry. This enables users to train with real-life operational data and allows them to fully benefit from scenario-based training using a problem-solving approach.


Fuad Baloch, "Re-Visualising Cyberspace: Using Quasi Objects for Spatial Definitions"

This paper proposes the addition of Cyberspace as a construct in discussions regarding the Internet, and is an initial philosophical exploration for a PhD thesis exploring governance structure for the Internet.


Nassiriah Shaari, Clare Churcher and Stuart Charters, "Customization of Web Content for Desktop and Mobile Devices"

Many adaptation methods have been proposed to adapt web pages designed for desktop computers so that they display adequately on mobile devices' small screens. While some restructure the page content and layout, others minimize or hide certain web page items. This study proposes a customization-by-prioritization method that enables users to determine which items on a web page should be displayed on their mobile devices, and in what order. It allows users to rank page items based on their preferences. The prototype currently being developed has a basic customization interface and uses session variables and a database to store user preferences. Comprehensive user trials to investigate the usability of the interface, and of the management and storage options, are planned as future work.

This topic: Events/NZCSRSC2010 > WebHome > PaperAbstracts

Page Updated: 24 Nov 2011 by christo. © Victoria University of Wellington, New Zealand, unless otherwise stated