The Australian National University

Summer research scholar projects 2014

If you are thinking of studying towards a PhD, then the summer scholar program is an ideal opportunity to try your hand at research. In addition to the exciting research opportunities on offer, there will also be a range of interesting activities, both social and research-related, that you will be able to join in during your time with us.

There are two key decisions that you need to make about your summer research project - your choice of supervisor, and the project itself.

All prospective scholars will need to contact and then negotiate with potential supervisors regarding the nature of the work and research they are interested in before they apply to the ANU.

The projects themselves are grouped and described within our research areas below.

You can look at both the available projects and our supervisors in more detail below.

Projects can be tailored to suit the interests of both supervisor and student, so you can always look at developing your own project in consultation with your proposed supervisor.


Applied Signal Processing

Dense non-rigid structure from motion under complex deformation

Yuchao Dai

Non-rigid 3D reconstruction aims at endowing computers with the ability to capture and reconstruct 3D non-rigid scenes from multiple 2D image observations from different viewpoints.

This project aims at handling complex deformation in non-rigid structure from motion, thereby enabling dense non-rigid reconstruction in real-world applications. Previous non-rigid structure from motion techniques mainly take sparse correspondences as input, and there are few methods that deal with dense correspondences. Recent work shows that extensions of sparse non-rigid structure from motion methods can be applied to dense non-rigid reconstruction under simple non-rigid deformation. This project aims to fill that gap by exploiting dense non-rigid structure from motion methods under complex deformation scenarios.
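
For context, most existing sparse methods build on the low-rank shape-basis model (a standard formulation from the literature, not specific to this project), in which the non-rigid shape in frame f is a combination of K basis shapes and the 2D measurements follow from the camera rotation and translation:

$$ \mathbf{S}_f \;=\; \sum_{k=1}^{K} c_{fk}\,\mathbf{B}_k, \qquad \mathbf{W}_f \;=\; \mathbf{R}_f\,\mathbf{S}_f + \mathbf{t}_f\,\mathbf{1}^{\top} $$

Handling dense correspondences and complex deformations means going beyond a single global low-rank basis, for instance towards the local non-linear/linear deformation representations mentioned below.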

Through investigating local non-linear/linear deformation representation, global adjustment and optimization method, and scalability algorithms, this project aims at alleviating major difficulties (dense correspondences, long sequences, and complex deformations) associated with current non-rigid reconstruction methods.



Back to Top


Artificial Intelligence

Compact Vehicle Roaming

Philip Kilby & Charles Gretton

In the Vehicle Routing Problem, a set of customers is visited by a fleet of vehicles so as to minimize delivery costs. Constraints such as vehicle capacity, and visiting customers within a nominated time window must be observed. This is a problem with huge practical application.
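
As a minimal illustration of the problem setting only (not of the optimisation methods the project would use, and with made-up data), here is a greedy route-construction heuristic that respects a capacity constraint:

```python
import math

def nearest_neighbour_routes(depot, customers, demands, capacity):
    """Greedy construction heuristic: each vehicle repeatedly visits the
    nearest unvisited customer that still fits within its capacity."""
    unvisited = set(customers)
    routes = []
    while unvisited:
        route, load, pos = [], 0.0, depot
        while True:
            feasible = [c for c in unvisited if load + demands[c] <= capacity]
            if not feasible:
                break
            nxt = min(feasible, key=lambda c: math.dist(pos, c))
            route.append(nxt)
            load += demands[nxt]
            pos = nxt
            unvisited.remove(nxt)
        if not route:
            raise ValueError("a customer's demand exceeds the vehicle capacity")
        routes.append(route)
    return routes

# Toy instance: (x, y) customer locations, unit demands, vehicle capacity 3.
depot = (0.0, 0.0)
customers = [(1, 1), (2, 1), (8, 8), (9, 7), (1, 2), (8, 9)]
demands = {c: 1.0 for c in customers}
print(nearest_neighbour_routes(depot, customers, demands, capacity=3))
```

Routes produced this way are typically neither cost-optimal nor geographically compact, which is exactly the gap this project addresses.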

In real-life situations, fleet managers often express a preference for routes that are "compact": the area served by a vehicle should not overlap with areas served by other routes. Routes then define a non-overlapping "service area" for each driver.

This project will look at methods for constructing routes that have this feature. The work will involve ideas from artificial intelligence, optimisation, and also from computational geometry.



Future Energy Systems

Hassan Hijazi

The world we live in is continuously changing and natural disasters are unfortunately increasing. The way we handle energy, its generation and consumption will affect the world we will leave behind for future generations.

The aim of this project is to develop efficient algorithms for a range of critical energy-related problems for tomorrow's smart grid. The subject includes robust smart grid design and power restoration in disaster management.

The goals of this project are to develop reliable and scalable methods for solving complex energy-related problems featuring nonlinear constraints, discrete decisions, and uncertainty.

Student Gain:

  • Real impact on real world problems.
  • Expertise in Mathematical Optimisation.
  • Direct contact with industry.



Generic Reinforcement Learning Agents (GRLA)

Marcus Hutter & Peter Sunehag

Agent applications are ubiquitous in commerce and industry, and the sophistication, complexity, and importance of these applications is increasing rapidly; they include speech recognition systems, vision systems, search engines, planetary explorers, auto-pilots, spam filters, and robots [RN03]. Existing agent technology can be improved by developing systems that can automatically acquire during deployment much of the knowledge that would otherwise be required to be built in by agent designers. This greatly reduces the effort required for agent construction, and results in agents that are more adaptive and operate successfully in a wide variety of environments.

Goals of this project: Technically, the project is about a recent general approach to learning that bridges the gap between theory and practice in reinforcement learning (RL). General-purpose, intelligent, learning agents cycle through sequences of observations, actions, and rewards that are complex, uncertain, unknown, and non-Markovian [RN03].

On the other hand, RL is well developed for small finite-state Markov decision processes (MDPs) [SB98]. Extracting the right state representation out of bare observations, that is, reducing the general agent setup to the MDP framework, is an art that involves significant effort by designers. The project is to investigate (by simulation or theoretically) recent models [Hut09] that automate the reduction process and thereby significantly expand the scope of many existing RL algorithms and the agents that employ them.
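
To make the MDP baseline concrete, here is a minimal tabular Q-learning loop on a toy finite MDP; everything in the sketch, including the environment, is illustrative only:

```python
import random

def q_learning(n_states, n_actions, step, episodes=300,
               alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning for a small finite MDP.
    `step(s, a)` must return (next_state, reward, done)."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if random.random() < epsilon:          # explore
                a = random.randrange(n_actions)
            else:                                   # exploit (random tie-break)
                best = max(Q[s])
                a = random.choice([i for i in range(n_actions) if Q[s][i] == best])
            s2, r, done = step(s, a)
            # one-step temporal-difference update
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

# Toy 5-state chain: action 1 moves right (reward at the far end), action 0 resets.
def chain_step(s, a):
    if a == 1:
        return (0, 1.0, True) if s + 1 == 4 else (s + 1, 0.0, False)
    return 0, 0.0, False

Q = q_learning(5, 2, chain_step)
print([max(range(2), key=lambda a: Q[s][a]) for s in range(5)])  # greedy policy
```

The hard part the project targets is everything this sketch assumes away: the states here are given, whereas a general agent must extract a suitable state representation from raw, non-Markovian observations.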



Human Knowledge Compression Contest(HKCC)

Marcus Hutter

Being able to compress well is closely related to intelligence, as explained below. While intelligence is a slippery concept, file sizes are hard numbers. Wikipedia is an extensive snapshot of human knowledge.

If you can compress the first 100MB of Wikipedia better than your predecessors, your (de)compressor likely has to be smart(er). The intention of the Human Knowledge Compression Prize [Hut06] is to encourage development of intelligent compressors/programs.
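
As a trivially reproducible baseline, and nothing like a contest entry, the following sketch compares Python's standard general-purpose compressors on a text sample; the file name is a placeholder:

```python
import bz2, lzma, zlib
from pathlib import Path

data = Path("enwik_sample.txt").read_bytes()  # placeholder: any text sample

for name, compress in [("zlib", lambda d: zlib.compress(d, 9)),
                       ("bz2", lambda d: bz2.compress(d, 9)),
                       ("lzma", lzma.compress)]:
    out = compress(data)
    print(f"{name:5s} {len(out):>10d} bytes "
          f"({len(out) / len(data):.3f} of original)")
```

Competitive entries such as the paq8hpX series model the text far more deeply than these general-purpose compressors, which is the point of the contest.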

Goals of this project. Some of the following four subgoals shall be addressed:

  • Get acquainted with current state-of-the-art compressors, in particular the prize-winning paq8hpX series, and write a comparative survey.
  • Develop and test novel compression ideas.
  • Integrate them into one of the state-of-the-art compressors.
  • Investigate alternative performance measures that take the compressor more seriously into account (rather than only the decompressor).


Latent Support Vector Machine for Distributed Solar Prediction

Xinhua Zhang

The Distributed Solar Prediction project at ANU and NICTA has collected a large volume of data from rooftop photovoltaic (PV) panels distributed across Canberra. With the power output recorded every 10 minutes, machine learning technologies can be applied to predict the PV output over the next 30 minutes to 2 hours, which is useful to industry in many ways. The key feature of the learning paradigm is to fuse the data from multiple sites at multiple times, in order to improve the prediction accuracy at each site.

The goal of the summer project is to address an important practical issue: frequently data loggers break down and we are left with missing data. This problem is exacerbated when our model is based on multiple sites. A natural solution in machine learning is to use latent support vector machines, which learn the forecaster concurrently with the inference of the missing data. It can also be easily scaled up to very large datasets.
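
As a naive baseline only (synthetic data, simple mean imputation), the sketch below fills in missing readings before fitting a support vector regressor; a latent SVM would instead infer the missing values jointly with the forecaster rather than in a fixed preprocessing step:

```python
import numpy as np
from sklearn.svm import LinearSVR

rng = np.random.default_rng(0)

# Synthetic stand-in for the real data: power output of 5 sites at 200
# 10-minute steps, with ~10% of readings missing (broken loggers -> NaN).
P = np.clip(rng.normal(3.0, 1.0, size=(200, 5)), 0, None)
P[rng.random(P.shape) < 0.1] = np.nan

# Features: all sites' outputs at time t; target: site 0's output 3 steps
# (30 minutes) ahead.
horizon = 3
X, y = P[:-horizon], P[horizon:, 0]
keep = ~np.isnan(y)
X, y = X[keep], y[keep]

# Naive baseline: fill missing inputs with per-site means, then fit an SVR
# forecaster.
X_filled = np.where(np.isnan(X), np.nanmean(X, axis=0), X)
model = LinearSVR(C=1.0, epsilon=0.1, max_iter=20000).fit(X_filled, y)
print("training R^2:", model.score(X_filled, y))
```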



Machine Learning in the Cloud

Mark Reid

The Protocols and Structures for Inference (PSI) project (see http://psikit.net) consists of a specification and a prototype implementation for presenting machine learning data, learners, and predictors as RESTful web services. The aim of the specification is to make it easier for people to use machine learning tools to analyse their data, and to allow interoperability between machine learning components over the web.

The aim of this project is to build more PSI-compatible services by writing adaptors for existing algorithms in machine learning toolkits such as scikit-learn, Shogun, and Orange or by wrapping existing ML services such as Google Predict, BigML, and wise.io. The newly developed services will be deployed on the Amazon Web Services platform (using funds from an Amazon in Education grant). A student completing this project will gain experience in both web service development and applied machine learning.
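
To give a flavour of the kind of service involved, here is a generic scikit-learn predictor behind a hypothetical HTTP endpoint using Flask; this is emphatically not the PSI resource schema (see psikit.net for that), just the general shape of wrapping a predictor as a web service:

```python
from flask import Flask, jsonify, request
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Train a toy predictor once at start-up; a PSI-style service would instead
# expose data sets, learners and predictors as resources of their own.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    # expects a JSON body like {"instances": [[5.1, 3.5, 1.4, 0.2], ...]}
    instances = request.get_json()["instances"]
    return jsonify(predictions=model.predict(instances).tolist())

if __name__ == "__main__":
    app.run(port=8080)
```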



Machine understanding of images

Stephen Gould

Machine understanding of images is one of the long term goals of AI. In this project you will work on advanced machine learning algorithms for scene understanding.

Given a set of training images annotated with information about what objects and background regions are present, the algorithms learn to recognise these objects and background regions in new images.
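
A toy caricature of that supervised setup, on synthetic per-pixel colour features rather than the rich features and structured models real scene-understanding systems use:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Stand-in for annotated training images: per-pixel RGB features with
# labels 0 = "sky" and 1 = "grass".
sky = rng.normal([0.4, 0.6, 0.9], 0.05, size=(5000, 3))    # bluish pixels
grass = rng.normal([0.2, 0.7, 0.2], 0.05, size=(5000, 3))  # greenish pixels
X = np.vstack([sky, grass])
y = np.array([0] * 5000 + [1] * 5000)

clf = RandomForestClassifier(n_estimators=50).fit(X, y)

# "New image": a 4x4 patch of pixels to be labelled.
new_pixels = rng.normal([0.4, 0.6, 0.9], 0.05, size=(16, 3))
print(clf.predict(new_pixels).reshape(4, 4))
```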

Students should have a background in machine learning and strong C/C++ or Matlab programming skills.



Mathematical Foundations of Artificial Intelligence (MFAI)

Marcus Hutter & Peter Sunehag

The first decade of this century has seen the nascency of the first mathematical theory of general artificial intelligence. This theory of Universal Artificial Intelligence (UAI) has made significant contributions to many theoretical, philosophical, and practical AI questions.

In a series of papers culminating in the book [Hut05], an exciting, sound and complete mathematical model of a super-intelligent agent (AIXI) has been developed and rigorously analyzed. The model is actually quite elegant and can be defined in a single line.
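
For reference, that single line is the AIXI action-selection rule, reproduced here informally in the notation of [Hut05]: in cycle k the agent chooses

$$ a_k \;=\; \arg\max_{a_k}\sum_{o_k r_k}\;\cdots\;\max_{a_m}\sum_{o_m r_m}\big[r_k+\cdots+r_m\big]\!\!\sum_{q\,:\,U(q,a_1..a_m)=o_1 r_1..o_m r_m}\!\! 2^{-\ell(q)} $$

where U is a universal Turing machine, \ell(q) is the length of program q, and m is the agent's horizon.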

Goals of this project: The fundamentals of UAI are already laid out, but there are literally hundreds of fundamental theoretical/mathematical open questions [Hut05, Hut09] in this approach that have not yet been answered.


On the Foundations of Inductive Reasoning (FIR)

Marcus Hutter

Humans and many other intelligent systems (have to) learn from experience, build models of the environment from the acquired knowledge, and use these models for prediction. In philosophy this is called inductive inference, in statistics it is called estimation and prediction, and in computer science it is addressed by machine learning.

The problem of how we (should) do inductive inference is of utmost importance in science and beyond. There are many apparently open problems regarding induction: the confirmation problem (black raven paradox), the zero p(oste)rior problem, reparametrization invariance, the old-evidence and updating problems, to mention just a few.

Solomonoff's theory of universal induction based on Occam's and Epicurus' principles, Bayesian probability theory, and Turing's universal machine [Hut05], presents a theoretical solution [Hut07].
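
At the core of that theory is Solomonoff's universal prior, stated here informally: the prior probability of observing a string x is

$$ M(x) \;=\; \sum_{p\,:\,U(p)=x*} 2^{-\ell(p)} $$

where the sum ranges over the (minimal) programs p for which the universal monotone Turing machine U outputs something starting with x, and \ell(p) is the length of p. Prediction then proceeds by conditioning, M(x_{t+1} | x_{1:t}) = M(x_{1:t} x_{t+1}) / M(x_{1:t}).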

Goals of this project.

  • Elaborate on some of the solutions presented in [Hut07].
  • Present them in a generally accessible form, illustrating them with lots of examples.
  • Address other open induction problems.


Optimal path-finding in octile grids

Alban Grastien

Given a starting point and a target point on a map with obstacles, the goal is to find the shortest path that links these two points. Eight moves are available, in straight (for instance, heading north or east) or diagonal (for instance, south-east) directions.

This research project is concerned with optimal path-finding in octile grids.

A well-known solution to this problem is to use A*, but, as it turns out, it is possible to develop specialised algorithms that perform much faster. We developed an algorithm called JPS [1] that eliminates (online) most of the symmetries in the map and significantly outperforms A*. More recently, we developed an extension of JPS called JPS+ [2] that uses preprocessing to improve performance even further. JPS+ indeed runs faster than JPS, but this comes at the price of flexibility, since any change in the map requires updating the structure built at preprocessing time.
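
For reference, here is a plain A* search on an octile grid, the baseline that JPS and JPS+ improve upon; straight moves cost 1, diagonal moves cost sqrt(2), and the octile distance is used as an admissible heuristic:

```python
import heapq, math

MOVES = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
SQRT2 = math.sqrt(2)

def octile(a, b):
    """Admissible octile-distance heuristic."""
    dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
    return (dx + dy) + (SQRT2 - 2) * min(dx, dy)

def astar(grid, start, goal):
    """grid[y][x] is True for blocked cells; returns the cost of a shortest path."""
    h, w = len(grid), len(grid[0])
    g = {start: 0.0}
    frontier = [(octile(start, goal), start)]
    while frontier:
        _, (x, y) = heapq.heappop(frontier)
        if (x, y) == goal:
            return g[(x, y)]
        for dx, dy in MOVES:
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h and not grid[ny][nx]:
                cost = g[(x, y)] + (SQRT2 if dx and dy else 1.0)
                if cost < g.get((nx, ny), float("inf")):
                    g[(nx, ny)] = cost
                    heapq.heappush(frontier, (cost + octile((nx, ny), goal), (nx, ny)))
    return None

grid = [[False] * 5 for _ in range(5)]
grid[2][1] = grid[2][2] = grid[2][3] = True   # a wall of obstacles
print(astar(grid, (0, 0), (4, 4)))
```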

JPS+'s preprocessing is quite fast, but we would like to be able to update the structure without performing the full precomputation. The goal of this project is to identify which parts of the precomputed structure need to be updated, and to assess the cost of repairing the structure.



Planning, problem solving and acting

Patrik Haslum

Planning problems come in many disguises, from controlling airport ground traffic, elevators, or high-performance printers, to computing genome similarity, discovering faults in protocols or data streams, or even generating a storyline.

The aim of planning research (as in many other branches of AI) is to construct domain-independent ("universal") solutions for this kind of problem. That is, rather than solving each application problem individually, a general AI planning system should be able to solve any one of them, provided with a formal specification of the problem as input.

A list of some current projects is available at http://cecs.anu.edu.au/projects/500. Please contact me if you would like to know more about any of them.



Universal Artificial Intelligence (UAI)

Marcus Hutter & Peter Sunehag

The dream of creating artificial devices that reach or outperform human intelligence is an old one. Most AI research is bottom-up, extending existing ideas and algorithms beyond their limited domain of applicability. The information-theoretic top-down approach (UAI) pursued in [Hut05] justifies, formalizes, investigates, and approximates the core of intelligence: the ability to succeed in a wide range of environments [LH07]. All other properties are emergent.

Recently, effective approximations of UAI have been derived and experimentally investigated [VNHUS11]. This practical breakthrough has resulted in some impressive applications. For the first time, without providing any domain knowledge, the same agent is able to self-adapt to a diverse range of interactive environments. For instance, it is able to learn from scratch to play TicTacToe, Pacman, Kuhn Poker, and other games by trial and error.

These achievements give new hope that the grand goal of Artificial General Intelligence is not elusive.

Goals of this project: The theoretical [Hut05], philosophical [LH07], and experimental [VNHUS11] foundations of UAI are already laid out, but plenty remains to be done to solve the AI problem in practice. The complexity of the open problems ranges from suitable for short projects to full PhD theses and beyond.


Back to Top

Computer Systems

Accelerating Scientific Applications using Graphics Processing Units and other Devices

Peter Strazdins

Scientific applications ranging from simple advection solvers to global atmosphere models may benefit greatly from acceleration by devices such as GPUs.

This project will investigate the issues and techniques required on applications of interest.



Enhancing Concurrent Program Development Tools and Methodology

Peter Strazdins

Concurrent program development can be made rigorous by creating a machine-checkable model of the underlying concurrent system. Once the model is analyzed and passes the required correctness properties (e.g. freedom from deadlock), methodologies exist for deriving an implementation of the model (the desired concurrent program). However, it is possible that errors are introduced at this stage.

This project will investigate how to address this issue, namely by feeding the implementation's event trace back into the model checker for consistency checking. This will involve extending the (Java-based) Labelled Transition System Analyzer tool developed at Imperial College London for this purpose.



Inter-block Synchronisation on GPGPUs without any inter-block communication

Eric McCreath

General-purpose computation on graphics processing units (GPGPU) normally involves the CPU launching thousands of threads on the GPU. These threads are grouped into blocks. Barrier synchronisation between threads within the same block is straightforward; however, to achieve barrier synchronisation across all the threads one generally has to return to the CPU.

Xiao et al. developed an approach that used shared memory to enable barrier synchronisation across blocks. This project will explore whether one can achieve such synchronisation without that shared-memory communication, instead relying on careful timing and ordering of the memory reads and writes that necessitate the synchronisation in the first place. The project will explore the viability, performance, and limitations of such an approach.



Large 3D Ultrasound Simulation on a GPGPU using the Westervelt Equation

Eric McCreath

The simulation of ultrasound signals is important in fields including ultrasound system design and delivery of therapeutic ultrasound. Ultrasound simulation is a computationally intensive task both from the perspective of the number of floating point operations required and memory bandwidth.

This project involves implementing and evaluating a 3D ultrasound simulation using a finite-difference time-domain (FDTD) approach on a GPGPU, with a focus on understanding the bottlenecks in the implementation and exploring the limits on the size of simulation that is feasible.
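
As a much-simplified illustration of the FDTD idea, here is a 1D linear wave-equation leapfrog update in NumPy; the Westervelt equation adds nonlinear and absorption terms to this kind of stencil, and the project targets full 3D on a GPGPU:

```python
import numpy as np

# 1D linear wave equation via finite-difference time-domain (FDTD).
nx, nt = 400, 600
c, dx = 1500.0, 1e-3                 # sound speed (m/s), grid spacing (m)
dt = 0.5 * dx / c                    # time step satisfying the CFL condition
coef = (c * dt / dx) ** 2

p_prev = np.zeros(nx)
p = np.zeros(nx)
p[nx // 2] = 1.0                     # initial pressure impulse in the middle

for _ in range(nt):
    lap = np.zeros(nx)
    lap[1:-1] = p[2:] - 2 * p[1:-1] + p[:-2]     # discrete Laplacian
    p_next = 2 * p - p_prev + coef * lap         # second-order leapfrog update
    p_prev, p = p, p_next

print("peak pressure after", nt, "steps:", p.max())
```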



Unmanned aerial vehicle research

Uwe Zimmer

In the context of our collection of UAVs (which range from high-speed fixed-wings to 1.5 m rotor-disk helicopters) open questions can be addressed according to your interests and abilities.

Projects in the fields of sensor-processing, high-integrity programming, real-time programming, energy management, energy harvesting (especially solar and gliding techniques), telemetry, remote piloting, autopilots and other related areas can be set up by coordinating with us.

You should bring the enthusiasm to prepare and run field experiments and should also come with a solid understanding of basic physics (you will be handling airplanes or helicopters) and with robust programming skills. Contact me if this sounds like fun to you and we will formulate a set of experiments which fit your competencies and interests.



Using GPUs for connected component labeling

Eric McCreath

Connected component labeling (CCL) involves grouping connected elements into components and giving each component a unique label. CCL is useful in a number of applications, such as image processing. There are a variety of approaches that can be used to implement CCL on a GPGPU (such as union-find based approaches).

This project will explore current approaches, their limitations, and how they can be improved on.
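
For reference, here is a sequential union-find labeling of a binary image (4-connectivity); the research question is how to map this style of algorithm efficiently onto a GPU:

```python
import numpy as np

def label_components(img):
    """Sequential union-find connected-component labeling of a binary image."""
    parent = {}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]      # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    h, w = img.shape
    for y in range(h):
        for x in range(w):
            if img[y, x]:
                parent[(y, x)] = (y, x)
                if y > 0 and img[y - 1, x]:
                    union((y, x), (y - 1, x))
                if x > 0 and img[y, x - 1]:
                    union((y, x), (y, x - 1))

    labels = np.zeros((h, w), dtype=int)
    roots = {}
    for p in parent:
        labels[p] = roots.setdefault(find(p), len(roots) + 1)
    return labels

img = np.array([[1, 1, 0, 0],
                [0, 1, 0, 1],
                [0, 0, 0, 1]], dtype=bool)
print(label_components(img))
```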



Back to Top

Computer Vision and Robotics

Object and material recognition with minimal spatio-spectral features

Cong Phuoc Huynh

In this project, we aim to develop an algorithm for recognising objects and their material compositions in multispectral and hyperspectral images. While exploiting the rich representation of this data, recognition methods are often challenged by the potentially high dimensional feature space.

The work will leverage the rich combination of spatial and spectral features in an image for the recognition task. This involves finding the most discriminative feature combination for the classification problem at hand. In addition, it is desirable to reduce the dimension of the feature space to the minimum to avoid long computational time and large memory requirements.
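
One off-the-shelf way to approximate the "smallest discriminative subset" requirement, shown on synthetic data only (the project would tailor the selection criterion to spatio-spectral features), is recursive feature elimination:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC

# Synthetic stand-in for spatio-spectral features: 200 samples, 50 features,
# of which only a handful are informative about the material class.
X, y = make_classification(n_samples=200, n_features=50, n_informative=5,
                           random_state=0)

# Recursive feature elimination keeps shrinking the feature set while a
# linear classifier is refit on the survivors.
selector = RFE(LinearSVC(max_iter=5000), n_features_to_select=5).fit(X, y)
print("selected feature indices:", np.where(selector.support_)[0])
```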

The goals of the project involve:

  • The design of a feature selection algorithm that yields the shortest combination of highly discriminative features.
  • An implementation of the algorithm that ideally operates in real-time on real-world imagery.


Back to Top

Information and Human Centered Computing

Visualisation and Livecoding

Henry Gardner & Ben Swift

Livecoding is an arts practice built around computer music and multimedia.

We have developed a programming system which enables not only artistic livecoding but also live programming of cyber-physical systems such as robots, physics simulations and parallel computers. We have also just completed a study of automatically generated visualisations to accompany livecoding arts performances.

This summer project will extend this work and perhaps take it in the direction of live parallel computing.



Live steering of parallel simulation codes

Henry Gardner & Ben Swift

This project will look at run-time load balancing and optimisation of scientific simulations running on parallel computing architectures.

Extempore is a programming environment for live programming. xtlang, an LLVM-JIT-compiled programming language hosted by the Extempore compiler, supports toll-free linking and calling into shared C libraries (.so, .dylib or .dll depending on platform). This allows Extempore to use many open-source data analysis and numeric computation libraries, e.g. FANN for neural networks or libsvm for support vector machines.

Extempore's support for real-time just-in-time compilation (and re-compilation) and for programmer interaction in a 'live programming' paradigm allows for the possibility of controlling these libraries at a high level and orchestrating multiple concurrent analyses of the same data, making decisions based on which ones are proving more fruitful. This project would involve:

  • prototyping a set of simulations on parallel computers in the School of Computer Science under real-time control of Extempore.
  • implementing library/tooling support in Extempore to provide a programmer with appropriate feedback and control of a simulation.

Note: This project is planned but not finalised to run in semester 2 of 2014. Please contact the advertised supervisors for further details.



Back to Top

Logic & Computation

Learning to Scale up Entity Resolution

Qing Wang

Entity resolution is a long-standing challenge in many areas of computer science. To deal with large data sets, many real-life applications require scalable and high-quality entity resolution techniques.

This project will investigate machine learning algorithms that actively learn the dynamics of their environment to support scalable, high-performance entity resolution.
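
To illustrate why scalability is the crux, here is a toy sketch of the standard blocking-plus-matching pipeline with hand-picked keys and thresholds; the project studies how such choices can be learnt rather than fixed by hand:

```python
from difflib import SequenceMatcher
from itertools import combinations

records = [
    {"id": 1, "name": "Jonathan Smith", "suburb": "Acton"},
    {"id": 2, "name": "Jon Smith",      "suburb": "Acton"},
    {"id": 3, "name": "Maria Garcia",   "suburb": "Dickson"},
    {"id": 4, "name": "M. Garcia",      "suburb": "Dickson"},
]

# Blocking: only compare records that share a cheap key (here, the suburb),
# which is what makes pairwise comparison feasible on large data sets.
blocks = {}
for r in records:
    blocks.setdefault(r["suburb"], []).append(r)

def similarity(a, b):
    return SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()

for block in blocks.values():
    for a, b in combinations(block, 2):
        score = similarity(a, b)
        if score > 0.6:
            print("candidate match:", a["id"], b["id"], round(score, 2))
```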



Back to Top

Materials and Manufacturing

Analysis of imperfections in glassy metals with a view to shear band formation

Zbigniew Stachurski

Contact Zbigniew Stachurski.



Electrospinning of iron-based nanomaterials for water-splitting applications

Adrian Lowe & Wojciech Lipinski

Despite several false starts, there is still a strong possibility that water-splitting to generate renewable fuels can be a competitive technology in terms of cost and efficiency. One approach is to utilize iron-based ceramics of controlled porosity and thermal stability as a catalyst for high-temperature splitting. However, current materials possess neither the required porosity profile nor the thermal stability, so new processing regimes need to be investigated.

One such regime is electrospinning. Electrospinning is a very simple, cheap, upscalable technique that uses basic precursor materials to spin nano-dimensioned fibres with unique structural characteristics. Basically, a sol-gel is created consisting of an inorganic precursor, a simple polymer and a solvent, and this is fed through a high-voltage electrical field via a syringe onto a collector mechanism. The resultant polymer fibres are then baked under predefined conditions to form ceramic oxide nanostructures that are either fibrous or particulate. The ANU Energy Nanomaterials Group has many years' experience in the successful use of this technique to produce ceramic oxide fibres, ranging from titania for photocatalytic applications through to vanadia for batteries and cobaltites for thermoelectric devices. It is hypothesised that electrospinning of iron-based ceramic nanostructures (both pure and doped) could lead to better porosity and thermal stability than current structures, such that water-splitting applications could become possible.

This project will involve the creation of iron oxide nanostructured fibres from appropriate precursor systems, which will then be characterized through X-ray diffraction, electron microscopy, porosity measurements and thermal stability tests. The student will also be expected to liaise with the ANU Solar Thermal Group to see how these materials could be utilized in a hybrid solar thermal water-splitting system.



Back to Top

Solar Thermal Group

Assessment of optical error budgets for heliostats

Joe Coventry

Design of a heliostat for a solar power tower system involves many trade-offs regarding accuracy of subcomponents vs cost.

This project will examine the various sources of optical error for a heliostat (mirror facet shape error, facet canting error, support structure accuracy, actuation error, calibration error) to help set optical error budgets for heliostat subsystems.

Methods of analysis may include statistical methods that assume normal distribution functions of optical error (e.g. the method of Bendt and Rabl), and/or deterministic methods using ray tracing. The total optical error budget for a heliostat will be determined such that the heliostat field is designed for operation of a high-efficiency, high-temperature solar receiver.
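
As a sketch of the statistical approach, assuming independent, normally distributed error sources (all numbers below are made up for illustration), the component contributions combine in quadrature:

```python
import math

# Illustrative (made-up) component error budget for a heliostat,
# as 1-sigma values in milliradians.
budget_mrad = {
    "mirror slope error":   1.3,
    "facet canting error":  0.8,
    "structure deflection": 0.6,
    "tracking/actuation":   0.5,
    "calibration":          0.4,
}

# Independent normal errors: variances add, so the total is the root sum square.
total = math.sqrt(sum(s ** 2 for s in budget_mrad.values()))
print(f"total optical error: {total:.2f} mrad (1-sigma)")
```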

The aim of the project is to provide the heliostat designer with cost functions for each heliostat subsystem that contributes to optical error, so that rational decisions can be made with regard to the trade-off between accuracy and cost.



Non-imaging optics for high-temperature solar applications

Wojciech Lipinski

Compound parabolic concentrators are used as secondary concentrators in high-temperature solar thermal systems.

In this study, existing designs will be reviewed and their optical performance will be compared. New concepts will also be explored using the Monte Carlo ray-tracing technique.



Solar fuels via metal-oxide redox cycles

Wojciech Lipinski

Production of hydrogen and carbon monoxide from water and carbon dioxide via high-temperature metal-oxide redox cycles is investigated for improved process efficiency and lower fuel cost. The focus is on development of novel redox materials and their implementation in solar reactors.

In this study, routes to efficient production of solar fuels using perovskites will be investigated theoretically.



Back to Top

Systems and Control

Localization Algorithms and Systems

Changbin (Brad) Yu & Ben Nizette

To be advised.



Rotary Aerial Robotics

Changbin (Brad) Yu, Edwin Davis & Qingchen Liu

To be advised.



Back to Top

Updated: 1 August 2014