Talks

Aaron Leese
Realtime Bandwidth Limiting for Better Audio

Last year I came across a paper on what are called minimum-phase band-limited steps (exciting, I know). The paper contended that you could build alias-free digital waveforms in real time, allowing for all sorts of options not otherwise available with other synthesis techniques (for example wavetable or additive synthesis).

What's more, you can also use this technique to handle all sorts of nonlinearities. You could, for example, skip around in an audio file without worrying about adding a crossfade, create flangers or chorus effects that modulate with sawtooth rather than sine waves, or catch audio errors and correct them in real time.

The purpose of this talk is to share not just the implementation of this technique (with code examples), but also to discuss the wide-ranging applications, the limitations, and some alternative methods.
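
As a flavour of the approach, here is a minimal sketch of polyBLEP, a closely related technique that patches the discontinuity of a naive sawtooth with a polynomial band-limited step (the minimum-phase BLEP method the talk covers is more sophisticated, but the idea is the same):

    // Naive sawtooth with a polyBLEP correction around each discontinuity.
    // phase is in [0, 1) and dt = frequency / sampleRate.
    float polyBlep (float t, float dt)
    {
        if (t < dt)                    // just after the step
        {
            t /= dt;
            return t + t - t * t - 1.0f;
        }
        if (t > 1.0f - dt)             // just before the step
        {
            t = (t - 1.0f) / dt;
            return t * t + t + t + 1.0f;
        }
        return 0.0f;
    }

    float nextSawSample (float& phase, float dt)
    {
        float sample = (2.0f * phase - 1.0f) - polyBlep (phase, dt);
        phase += dt;
        if (phase >= 1.0f) phase -= 1.0f;
        return sample;
    }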

Adam Wilson
Cross-platform mobile development with React Native and JUCE

Building on last year’s talk "Mobile development with JUCE and native APIs", which demonstrated using Android and iOS APIs with JUCE, this talk will look at utilising React Native to manage native views together with JUCE.

React Native is a cross-platform mobile development framework developed by Facebook that allows native mobile applications to be written in JavaScript, sharing the majority of the code across platforms. That means we don't have to write our UI logic for each platform, and we can take advantage of the growing library of React Native components via npm. Combining JUCE’s optimised audio and graphics capabilities with React Native allows us to leverage the best of both worlds: JUCE’s audio processing, MIDI, graphics and network classes with React Native’s rapid cross-platform mobile UI development.

The talk will cover the architecture of React Native and how JUCE and JUCE Components can be used within a React Native app both on iOS and Android, including a live demo. The advantages and disadvantages of the approach will be presented. A tutorial including source code will be provided ahead of the conference.

Paul-Arthur Sauvageot
Plugin Generator

Plugin Generator is a framework which transforms Pure Data patches into executables and/or audio plugins in various formats such as VST, AAX and AU. It was born from the idea of switching seamlessly from prototyping to final implementation. By combining the JUCE framework and a Pure Data virtual machine, we were able to develop a framework that allows professional-quality audio plugins to be developed quickly, addressing current pitfalls such as multi-instance problems.

Plugin Generator is an internal tool for us at AudioGaming, but it could become a lot more. By developing a graphical editor we could allow any sound designer to create their own audio plugins without needing to learn programming. We could also extend its compatibility to new platforms such as gaming consoles and ARM embedded machines, or make it more interoperable with other frameworks.

Would Plugin Generator be useful to you? How can we improve it? We would like to know what you think, to discuss the pros and cons of such an approach, and to explore how we could give more control to modern sound designers.

André Bergner
Signal Flowz through C++ wires

Digital signal processing is ubiquitous in modern digital technology, ranging from classical signal transmission and neural networks to image and audio processing and time series analysis.

Flowz is a library that strives to let you write digital signal processors in a declarative and composable manner, generating efficient code and integrating well with existing C++ code and frameworks.

Flowz is inspired by the Faust language and the algebra of flownomials, and implements a similar concept within C++. This embedded domain-specific language allows you to describe network layouts and the processing of data flowing through these networks.

While the user can focus on *what* should be processed, flowz will take care of connecting the *wires* between processing boxes and creating the state that is implicitly described by the flowz expressions and needed by the signal processing algorithm.

Ben Supper
MPE: Making MIDI more expressive

Building new kinds of expressive musical instruments is challenging. Aside from the almost unending physical and technical challenges are the problems of collaboration. No small company is an island. New instruments benefit from access to the existing ecosystem of music creation technology. Would-be competitors in a niche sector must achieve consensus and provide mutual compatibility to avoid market fragmentation and format wars.

The ubiquitous MIDI spec, now well into its fourth decade, does not provide a ready-to-use mechanism for conveying multiple dimensions of expressive control for each note of a performance, or even for conveying microtonal polyphonic pitch. MPE [MIDI Polyphonic Expression] started in 2014, when ROLI, Roger Linn, and many other interested parties agreed a set of best practices to circumvent these limitations. Last year, the MPE Working Group was founded by the MIDI Manufacturers Association, involving many more industry partners across several countries. An implementation of the draft spec is already available as a JUCE library, and MPE is on its way to becoming an official extension of MIDI.

Ben Supper is a system engineer at ROLI. He also chairs the MPE Working Group. Ben will explain the rationale of MPE, how it works, why you might want to be involved with it, and what its technical implications are. He will also provide some tentative insights into how to make a Working Group that works, and a specification that might thrive in a difficult world.
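
As a flavour of the channel-per-note idea at the heart of MPE, here is a minimal sketch using the JUCE MIDI classes. The helper name is mine, and the channel numbering assumes an MPE lower zone (master channel 1, member channels 2-16):

    // Start a note on its own member channel, so that channel-wide messages
    // such as pitch bend affect only this note.
    void startExpressiveNote (juce::MidiOutput& out, int memberChannel,
                              int noteNumber, juce::uint8 velocity, int pitchWheel)
    {
        out.sendMessageNow (juce::MidiMessage::pitchWheel (memberChannel, pitchWheel));
        out.sendMessageNow (juce::MidiMessage::noteOn (memberChannel, noteNumber, velocity));
    }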

Ben Supper, Chris Pike, Chris Travis, Hans Fugal
Self-centred sound: What will the VR revolution mean for you?

Immersive gaming has taken off at the second attempt, and VR experiences need decent soundtracks. Every major tech company, it seems, either has a 3D audio spatialisation framework or is working on one. Experimental multichannel recording formats from decades ago are being repurposed for VR sound, and the BBC is experimenting with binaural drama again.

As coders with the right technical skills and creative inclination, what will we have, what will we need, and what will be expected of us? As creative people, how might we use new tools to further our own projects?

A panel of leading technologists and creators will discuss where the immersive audio boom is heading, what our technical, commercial, and creative challenges are likely to be in the near future, and where the undertow from much larger players in silicon, software, and entertainment industries might lead us.

Christian Luther
Sharpening the Saw - Building and Cultivating Audio Intuition

Developing audio products requires broad knowledge of theory in signal processing, electronics, acoustics, music and more. However, the key to great, innovative products is not the accumulation of textbook knowledge, but building and cultivating intuition. This is what gives us the ability to see behind the maths, circuits and sheet music.

This talk is about sharpening the saw and developing a useful set of "thinking tools" for audio development. Such a toolbox not only makes theory much more fun, but also creates a never-ending stream of inspiration and innovative ideas.

Christoph Hart
Turning the JUCE JavaScript Engine into a Rapid DSP development tool

The JavaScript engine in JUCE is perfectly suitable for scripting UIs or doing some light data processing. However, with some modifications to its internals, it can be transformed into a fully usable development tool for DSP applications.

In this demonstration I will retrace the required optimization steps and introduce a DSP scripting environment based on this tuned Javascript engine.

Costas Calamvokis
C++... whatever the question is, the answer is templates

Most C++ programmers will be familiar with templates from the STL. This talk will skim over STL-style applications before moving on to more interesting things. We'll look at how templates can be used to improve performance in audio applications, how they can help you write more compact code, and how they saved my skin when a project moved in an unforeseen direction. We'll also take a look at the variadic templates introduced in C++11 and the esoteric use of templates in a few of the Boost libraries.
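
As a small taste of the performance angle, consider a gain loop whose block size is a template parameter rather than a run-time variable, so the compiler can fully unroll and vectorise it (a toy example, not taken from the talk):

    // The template parameter bakes the block size into the generated code,
    // so the optimiser needs no run-time bounds checks to unroll the loop.
    template <int BlockSize>
    void applyGain (float* data, float gain)
    {
        for (int i = 0; i < BlockSize; ++i)
            data[i] *= gain;
    }

    // usage: applyGain<64> (blockStart, 0.5f);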

The talk's title refers to an ongoing joke I have with a coworker, but basically it is true! Any language feature that turns out to be accidentally Turing complete has to be worth a close look!

Daniel Jones
Chirp: Sound as a medium for data transmission

Chirp is a technology for broadcasting information via sound, encoding data as a series of audible tones that can be played over the air and received by any low-end device with a microphone. It is designed for robustness in real-world scenarios, resilient to background noise and reverberation, enabling information to be shared seamlessly with any device within hearing range.

This talk describes the design and implementation of the Chirp audio protocol and cross-platform tech stack, the unique affordances of sound as a frictionless broadcast networking medium, and how we envision a future ecosystem of machine-to-machine dialogues, from musical greetings cards to autonomous industrial robots to household Internet of Things devices.
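
As a toy illustration of the general idea (emphatically not Chirp's actual protocol, whose alphabet, tone lengths and error correction are its own), one can map each symbol of a message to a tone from a fixed table and synthesize the tones back to back:

    #include <cmath>
    #include <string>
    #include <vector>

    // Toy encoder: one audible tone per character. A real protocol adds a
    // preamble, error correction and carefully spaced frequencies.
    std::vector<float> encodeToTones (const std::string& message,
                                      float sampleRate = 44100.0f)
    {
        std::vector<float> samples;
        const int samplesPerTone = int (sampleRate * 0.1f);   // 100 ms tones

        for (char c : message)
        {
            // Map the character onto 32 steps spanning one octave above 1760 Hz.
            float freq = 1760.0f * std::pow (2.0f, (c % 32) / 32.0f);
            for (int i = 0; i < samplesPerTone; ++i)
                samples.push_back (0.5f * std::sin (6.2831853f * freq * i / sampleRate));
        }
        return samples;
    }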

We'll also be showing new R&D in developing audio communication protocols for specific acoustic environments, and harnessing hardware "device farms" to perform mass optimisation and QA of audio code across hundreds of remote real-world mobile devices.

Dan Klingler, Jay Coggin
Apple Audio Technology Overview

Deeply integrated, professional audio technologies make Apple platforms an important mainstay for the audio community. Recently, Apple introduced several new audio frameworks and features that enhance the audio experience for both users and developers. This talk will cover several of these innovations including Audio Unit Extensions, AVAudioEngine, Bluetooth MIDI and Inter-Device Audio. Come hear how these technologies enable your app to work seamlessly with Apple products and third-party audio accessories.

David Rowland
Using Modern C++ with JUCE to Improve Code Clarity

Building on the ideas behind last year’s talk "Using C++11 to Improve Code Clarity: Braced Initialisers", namely reducing code bloat to improve clarity, performance and robustness, this talk takes a wider look at modern C++ coding features and how to utilise them.

One major aspect of event-based application programming is the notion of "when something happens, do this". C++ can often get in the way of clearly expressing this intent. This talk aims to use modern coding styles, in combination with JUCE classes, to express this intent more clearly and concisely.

In particular, a number of possible lambda and std::function applications are demonstrated, ranging from timers and asynchronous callbacks to drawing methods and general delegation used to reduce dependencies on inheritance. A concise, practical look at std::async is also included, with an aim to improve app responsiveness by simply and effectively parallelising intensive areas of code.
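
For a flavour of the pattern, here is a sketch using only the standard library (the talk pairs this style with JUCE classes such as Timer and AsyncUpdater):

    #include <functional>
    #include <future>

    // "When something happens, do this": the owner assigns a lambda rather
    // than subclassing and overriding a virtual callback.
    struct Button
    {
        std::function<void()> onClick;
        void simulateClick() { if (onClick) onClick(); }
    };

    int main()
    {
        Button button;
        button.onClick = [] { /* respond to the click */ };
        button.simulateClick();

        // std::async: push an intensive task off the current thread.
        auto answer = std::async (std::launch::async, [] { return 6 * 7; });
        return answer.get() == 42 ? 0 : 1;
    }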

However, when transitioning to modern coding styles there can be some pitfalls. This talk also demonstrates some of these pitfalls and explains how to avoid them. It provides good practices for mixing JUCE and standard C++ code, in particular looking at the behaviour of auto and type deduction used in conjunction with JUCE smart pointers, and when to prefer the std alternatives.

This example-led talk aims to introduce new paradigms and increase the number of tools in your toolbox whilst keeping code clear, robust and maintainable. It is not an explanation of how C++11/14 features work under the hood, but the possibilities they unlock.

Don Turner, Phil Burk
What’s New in Android Audio

This talk covers the new audio features in Android Nougat and the capabilities of the newly released Pixel phones. The talk will be an opportunity to ask Google engineers questions about all things Android audio. Presented by Phil Burk (Audio Framework Software Engineer) and Don Turner (Pro Audio Developer Advocate).

Ray Chemo, Eike Verdenhalven, Tim Adnitt
An introduction to the Native Kontrol Standard

Native Kontrol Standard (NKS) is Native Instruments’ extended plug-in format for all virtual instrument developers.

In this talk we will give you a tour through the NKS SDK, which enables you to make your product NKS compatible so that it seamlessly integrates with KOMPLETE KONTROL and MASCHINE. Learn about the different levels of integration, from the basics of tagging your content to prepare it for the Native Browser, down to deep hardware integration.

We will show you how your product will benefit from NKS and provide you with the knowledge to place the work into your development backlog.

Fabian Renn-Giles
The new JUCE multibus API

With the release of version 4.3, JUCE has achieved full multi-bus audio capabilities, both when hosting and when creating audio plug-ins. With multiple audio busses it is possible to build audio plug-ins which process multiple streams of audio. This is useful for a wide range of applications such as noise gates, mixing and spatialization, among others.

In this workshop you will learn how to use this new API from its author. After an introduction to the API, we will build a few simple multi-bus plug-ins and a more advanced mixer plug-in, and finish the session off by building a simple DAW using only the built-in multi-bus AudioProcessors found in JUCE.
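
For a taste of the API, a plug-in declares its busses in its constructor and can then address each bus separately in processBlock. A sketch of a gate with a sidechain input follows (exact details may vary between JUCE versions, and the remaining AudioProcessor boilerplate is omitted):

    struct MyGate : public AudioProcessor
    {
        MyGate()
            : AudioProcessor (BusesProperties()
                                .withInput  ("Input",     AudioChannelSet::stereo())
                                .withInput  ("Sidechain", AudioChannelSet::mono())
                                .withOutput ("Output",    AudioChannelSet::stereo())) {}

        void processBlock (AudioSampleBuffer& buffer, MidiBuffer&) override
        {
            auto mainBus   = getBusBuffer (buffer, true, 0);   // main stereo input
            auto sidechain = getBusBuffer (buffer, true, 1);   // key signal
            // ... gate mainBus based on the level of sidechain ...
        }

        // (remaining AudioProcessor members omitted for brevity)
    };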

Felipe Tonello
Expand your Audio Application to the world of Embedded Linux

Linux is an operating system kernel with all the features you would expect in a modern, fully-fledged Unix, including true multitasking, proper memory management and advanced multi-stack audio support. All of that makes it very attractive for building embedded audio devices. But the process of doing so can be a real pain. Many problems arise, such as build systems, application cross-compilation, root file system support and many others.

This talk will present a solution to these problems using an open source project called OpenEmbedded. As well as describing a few of the problems and their solutions, it will present an extension that makes it easy to support JUCE applications, and will include a demonstration of how to do it.

Gebre Waddell
UI/UX Design Thinking and Best Practices for Audio Plugins

Design thinking encompasses the specific methods and approaches used by modern designers during UI/UX development. Our talk will cover this subject through the lens of audio plug-in development. We will also cover best practices and common interface design elements through data analysis of the top 100 commercial plugins, including topics such as toolbars, flat design, parameter outlining, cross-modality and 1080p/4K compatibility. Included will be a brief video of users discussing their experience with interfaces. We will conclude by honoring our users: the producers and engineers who craft the recordings we love.

Giulio Moro, Andrew McPherson
Bela: hard real-time, low latency audio and sensor processing on a Linux embedded board

Bela is an embedded platform for ultra-low latency audio and sensor processing. Bela combines the connectivity of a microcontroller with the processing capability of a single-board computer. Consisting of a BeagleBone Black combined with a custom cape, Bela provides stereo audio I/O including 1W speaker amplifiers, 8 channels each of 16-bit analog I/O, and 16 digital GPIO pins, all sampled at audio rate.

Bela allows hard real-time audio performance on a Linux embedded board using a Linux kernel patched with the Xenomai extensions. An on-board microcontroller is used to read the inputs from the ADC and write the outputs to the DAC, using a buffer of memory shared with the main ARM core, but working independently from it. This acts as a sophisticated DMA controller, which allows the audio program to bypass the Linux kernel and the ALSA drivers for audio I/O operations. The Xenomai extensions allow the real-time audio code to run at a higher priority than the Linux kernel, providing sub-millisecond round-trip latencies.

Bela's hardware and software are open source. In March 2016, Bela launched on Kickstarter, distributing boards to over 500 backers worldwide.

Glenn Kasten
Got the jitters? Towards predictable performance

A look at the various factors that can cause the performance of a periodic real-time workload to vary: how to measure and report this variability, how to reduce it, and how to deal with what remains.

Glenn Kasten
Thinking Inside A Box: how secure computation will change the way you do dataflow processing

First introduced in 2005, the Linux seccomp system call was initially ignored, prompting Linus Torvalds to wonder whether anyone even used it. Now that security is more important than ever, seccomp has become a critical tool. We look at traditional and new applications of seccomp in media processing, including dataflow architectures.
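
For readers who have never seen it, here is a minimal example of the original strict mode (a sketch; a real media sandbox would use seccomp-bpf filters and pre-opened file descriptors or shared memory for I/O):

    #include <linux/seccomp.h>
    #include <sys/prctl.h>
    #include <unistd.h>

    int main()
    {
        // After this call, only read(), write(), _exit() and sigreturn() are
        // permitted; any other system call kills the process immediately.
        if (prctl (PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0)
            return 1;

        const char msg[] = "processing inside the box\n";
        write (STDOUT_FILENO, msg, sizeof msg - 1);
        _exit (0);
    }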

Glenn Kasten
Will It Go Round In Circles: ring buffers redux

The circular buffer is the hammer of data structures for all your audio nails. It solves so many problems so well that it is tempting to use it everywhere. And we did, resulting in ten different versions, all subtly different. We'll look at the requirements that drove these implementations, the performance and security aspects of each, and what it took to merge most of them into a single code base. Open source and ready for re-use!
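
For orientation, the core of every variant is the same wrap-around index arithmetic. A single-threaded sketch (making this safe and fast across threads is the hard part the talk addresses):

    #include <cstddef>
    #include <vector>

    // Minimal circular buffer; the capacity must be a power of two so that
    // wrapping is a cheap bit-mask rather than a modulo.
    class CircularBuffer
    {
    public:
        explicit CircularBuffer (size_t powerOfTwoSize)
            : data (powerOfTwoSize), mask (powerOfTwoSize - 1) {}

        void push (float x)         { data[writePos++ & mask] = x; }
        float read (size_t delay)   { return data[(writePos - 1 - delay) & mask]; }

    private:
        std::vector<float> data;
        size_t mask, writePos = 0;
    };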

Ian Hobson
Compile-Time Signal Chains in C++

Signal graphs are often constructed using dynamic polymorphism, with a collection of nodes connected to each other via pointers at run-time. If we know the structure of our signal graphs before compilation, what advantages might there be in letting the compiler do the work of making our signal connections for us?

Can we achieve the holy grail of an easy-to-use, data-oriented, cache-friendly, faster-than-handwritten compile-time signal graph with minimal boilerplate?

Or, will we end up drowning in template error messages and impossible-to-read stack traces?
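
As a taste of what is possible, here is one minimal way of wiring a chain at compile time with variadic templates (a sketch under my own naming, not the approach presented in the talk):

    // Each processor is a value member, so the whole chain is one contiguous
    // object the compiler can inline straight through: no pointers, no virtuals.
    template <typename First, typename... Rest>
    struct Chain
    {
        First first;
        Chain<Rest...> rest;
        float process (float x) { return rest.process (first.process (x)); }
    };

    template <typename Last>
    struct Chain<Last>
    {
        Last last;
        float process (float x) { return last.process (x); }
    };

    struct Gain    { float g = 0.5f;           float process (float x) { return g * x; } };
    struct OnePole { float a = 0.2f, z = 0.0f; float process (float x) { return z += a * (x - z); } };

    Chain<Gain, OnePole, Gain> chain;   // the connections are fixed at compile time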

Ivan Cohen
Digital IIR Filters: history, state of the art, and some little secrets

Digital second-order infinite impulse response (IIR) filters, often called "biquads", are very special tools used widely by audio developers, audio engineers and musicians alike: in equalizers, dynamic processors, synthesizers, modulation effects, reverberation simulators, virtual analog audio effects, and almost every time a signal is oversampled. The implementation of these biquads is most of the time based on Robert Bristow-Johnson's "Audio EQ Cookbook" and other classic references, even for the source code of the JUCE IIRFilter class.

However, what is somewhat less well known is the origin of these equations, the axioms involved, all the previous iterations of digital filter structures, and the drawbacks of the classic biquad filters. Today, expert audio developers have many ways of tweaking digital biquad filters, and of simulating the analog audio circuits involving them, which fix their inherent issues and yield more realistic, more interesting digital filters.

In this talk, I'm going to show you that if, like me a few years ago, you think you already know everything there is to know about biquads thanks to the classic articles on the subject, you are probably wrong. I'll share all the information I have about the past, the present and the future of digital biquad filters. You'll learn everything you need to know about biquad frequency and phase response, zero-delay feedback filters, their use in nonlinear processing, optimal simulation structures, and some little secrets used in commercial products or studied by the scientific community.
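
For reference, the classic starting point the talk builds on: RBJ cookbook low-pass coefficients feeding a transposed direct form II structure (one structure among many; weighing their trade-offs is part of the talk's subject):

    #include <cmath>

    // Second-order IIR ("biquad"), transposed direct form II.
    struct Biquad
    {
        double b0 = 1, b1 = 0, b2 = 0, a1 = 0, a2 = 0;   // normalised so a0 == 1
        double z1 = 0, z2 = 0;                           // state

        // Low-pass design from Robert Bristow-Johnson's "Audio EQ Cookbook".
        void setLowpass (double sampleRate, double cutoff, double Q)
        {
            const double w0    = 2.0 * M_PI * cutoff / sampleRate;
            const double alpha = std::sin (w0) / (2.0 * Q);
            const double a0    = 1.0 + alpha;

            b0 = (1.0 - std::cos (w0)) / 2.0 / a0;
            b1 = (1.0 - std::cos (w0)) / a0;
            b2 = b0;
            a1 = -2.0 * std::cos (w0) / a0;
            a2 = (1.0 - alpha) / a0;
        }

        double process (double x)
        {
            const double y = b0 * x + z1;
            z1 = b1 * x - a1 * y + z2;
            z2 = b2 * x - a2 * y;
            return y;
        }
    };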

Lucas Mengual
Modal Synthesis of Weapon Sounds

Sound synthesis can be used as an effective tool in sound design. This paper presents an interactive model that synthesizes high-quality, impact-based combat weapon and gunfire sound effects. A procedural audio approach was taken to compute the model. The model was devised by extracting the frequency peaks of the sound source. Sound variations were then created in real time using additive synthesis and amplitude envelope generation. A subtractive method was implemented to recreate the signal envelope and residual background noise. Existing work is improved through the use of procedural audio methodologies and the application of audio effects. Finally, a perceptual evaluation was undertaken, comparing the synthesis engine to some of the analyzed recorded samples. In 4 out of 7 cases, the synthesis engine generated sounds that were indistinguishable, in terms of perceived realism, from recorded samples.
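
The resynthesis stage of any modal approach boils down to summing exponentially decaying sinusoids at the analysed frequency peaks. A toy sketch of that principle (not the paper's full model, which adds enveloped noise for the residual):

    #include <cmath>
    #include <vector>

    struct Mode { float freqHz, amplitude, decayPerSecond; };

    // One output sample of the modal part, t seconds after the impact.
    float modalSample (const std::vector<Mode>& modes, float t)
    {
        float out = 0.0f;
        for (const auto& m : modes)
            out += m.amplitude
                 * std::exp (-m.decayPerSecond * t)
                 * std::sin (6.2831853f * m.freqHz * t);
        return out;
    }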

Martin Percossi
Magnetic Note Repeat (Or: The joy of Exponentially Weighted Moving Averages)

The exponentially weighted moving average (EWMA) is a simple mathematical transformation with a very wide range of uses, from financial indicators to visual animation. In this talk, we present a novel use of EWMAs: to create a new, intuitive and responsive note repeat/arpeggiator effect, which adapts naturally to the user's timing mistakes.
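
The transformation itself is a single line,

    y[n] = y[n-1] + alpha * (x[n] - y[n-1])

and everything interesting lies in how alpha is chosen and modulated (the note-repeat application is, of course, the subject of the talk):

    // Exponentially weighted moving average: each call moves the running
    // average a fraction alpha of the way towards the newest input.
    struct Ewma
    {
        float alpha;        // 0 < alpha <= 1; larger means more responsive
        float y = 0.0f;

        float operator() (float x) { return y += alpha * (x - y); }
    };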

Martin Robinson
Porting JUCE smart pointer usage to C++11 classes

JUCE has long included the template classes ScopedPointer and ReferenceCountedObject, which help to provide object lifetime management using "smart pointers". With many projects moving over to C++11 (or later) compilers, the std::unique_ptr and std::shared_ptr classes offer a standardised way to achieve similar functionality. These classes are not direct replacements for the JUCE classes: some work is needed to ensure that the behaviour is equivalent. This session will highlight the similarities and differences between these equivalent classes and give a guide to porting code that uses the JUCE smart pointers to the modern C++11 idioms.
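
A typical before-and-after from this kind of port might look as follows (a sketch; one notable difference is that ScopedPointer converts implicitly to a raw pointer, whereas std::unique_ptr requires an explicit .get()):

    #include <memory>

    struct Widget { void paint() {} };

    void example()
    {
        // JUCE style:
        //   juce::ScopedPointer<Widget> w (new Widget());

        // Modern C++ equivalent:
        std::unique_ptr<Widget> w (new Widget());  // or std::make_unique<Widget>() in C++14
        w->paint();                                // usage is unchanged
    }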

Michael Zbyszyński
RAPID-API: a toolkit for machine learning & embodied interfaces

This lecture will present the newly developed RAPID-API, a comprehensive, easy-to-use toolkit and JUCE library that brings together the different software elements necessary to integrate a whole range of novel sensor technologies into products, prototypes and performances. API users have access to advanced machine learning algorithms that can transform masses of sensor data into expressive gestures that can be used for music or gaming. A powerful but lightweight audio library provides easy-to-use tools for complex sound synthesis.

The RAPID-API was created by the RAPID-MIX consortium, which aims to accelerate the production of the next generation of Multimodal Interactive eXpressive (MIX) technologies by producing hardware and software tools and putting them in the hands of users and makers. We have devoted years of research to the design and evaluation of embodied, implicit and wearable human-computer interfaces and are bringing cutting edge knowledge from three leading European research labs to a consortium of five creative companies.

Oron Cherry, Amir Arama
How to Make JUCE Plugins SoundGrid-Compatible

SoundGrid is an environment for powerful low-latency audio signal processing in real time. It is designed by Waves Audio, the world-leading developer of digital audio processing technologies. In SoundGrid, all processing takes place on a dedicated SoundGrid server, a Linux server with Intel CPU. Audio is streamed in and out of the server via standard Ethernet, which also gives SoundGrid networking capabilities. Because the server is dedicated to signal processing only, with no interruptions from GUI, user events, graphics, etc., it achieves very good stability and performance. The server’s in-to-out latency is less than 1 ms, and the processing power is enormous.

Several Waves software solutions make SoundGrid easily available to any audio engineer, live or in the studio. Studio engineers can run SoundGrid plugins on the eMotion ST mixer or the StudioRack plugin host, while live sound engineers can run their plugins on the popular MultiRack host application (used on massive tours by Bruce Springsteen, Pearl Jam, Lady Gaga and many more) or on the innovative eMotion LV1 software mixing console.

The above hosts and mixers are open not just to Waves plugins, but to SoundGrid-compatible plugins by other companies as well. Any plugin running on a Waves host needs to support WPAPI (Waves Public API). Implementing WPAPI through JUCE makes this almost seamless. This presentation explains how this is done, and why the process involved is easy and simple.

Pete Goodliffe
The Golden Rules of audio programming (and how to break them)

Audio programming requires skill and discipline if you want to create rock-solid, reliable, high-quality products. And, of course, you do want to create rock-solid, reliable, high-quality audio products! There are a number of established "best practices" that every audio developer must know and follow.

This talk will investigate a number of these “golden rules”. You’ll see why they’re important and why you can’t ignore them. It will then explain how you can work around them in reasonable, practical ways.

R. Revin Nelson
Musical Interaction with 3D Touch

With 3D Touch, Apple has brought the iPhone into the same league of expressive music controllers as the Linnstrument, Seaboard and Eigenharp. Using a basic example application, we'll walk through the nitty-gritty technical details of integrating the UITouch API into your app. Then we'll focus on some of the higher-level usability nuances of turning those touches into something musical. By the end of the talk, you'll have a solid understanding of everything you need to know to integrate 3D Touch into your mobile instrument as musically and intuitively as possible.

Richard Meyer
Developing iOS synths

JUCE now fully supports iOS development. This session will discuss the practical details of porting a synth plug-in to iOS.

Topics will include:

- How to set up your project in the Projucer
- UI considerations and graphical performance
- Frequency scaling, i.e. variable CPU clock speed and maximising battery life
- Support for AUv3, plus sharing presets and samples between the app and the AUv3
- Support for VSTs
- Adding platform-specific features such as in-app purchases

Sean Soraghan
Developing a real-time audio feature extraction tool in JUCE

This talk is based around the development of a real-time audio analysis tool using JUCE. The tool has been developed as part of a long-term research project and has been used in numerous live audio-visual performances. The talk will describe some of the major issues faced during the development of the tool, and a wider discussion will be given on the use of JUCE in a research context. The talk will also provide some discussion on the difference between developing for academia and developing for industry, from the perspective of someone who works in the middle.

SKoT McDonald
Size Isn't Everything, or, BFDLAC: A fast, lossless compression algorithm for multichannel drum sounds.

BFDLAC is a fast, lossless audio compression algorithm designed for efficient disk streaming, parallel decoding, and extensive use of SIMD instructions. Whilst not quite as compact as FLAC, it is much faster: an important quality when dealing with large audio channel counts.

Stefan Gränitz
Behind the Scenes of the Projucer Live-Build Engine

The live-build engine of the Projucer utilizes the power of LLVM to bring live coding to C++. While writing code in the Projucer or your preferred IDE, changes are automatically picked up, compiled and injected into a running instance of your JUCE component. This reduces the edit-compile-test cycle to the absolute minimum and speeds up development tasks such as UI and DSP design, as well as rapid prototyping. We will take a short look behind the scenes of the live-build engine and discuss the opportunities and limitations of live coding in C++.

Stephane Letz
Faust DSP language eco-system meets JUCE

Faust is a domain-specific language for real-time audio DSP processing and synthesis. Starting from the DSP source code, standalone applications, plug-ins, Web Audio pages and more can be automatically generated. Thanks to its modular architecture model, the generated audio computation can be connected to the audio layer and user interface manager.

The presentation will first sweep across the important aspects of the Faust ecosystem: from the language and compiler (backends and targets…) to the deployment tools and architectures (static and dynamic compilation chains, IDEs, remote compilation web services…) and hardware support (standard machines, mobiles, embedded platforms…). Several examples of integration with external projects and applications will be demonstrated. Finally, Faust as a support language in research and teaching will be discussed.

The second part will focus on integration with the JUCE framework: how statically generated programs (using the C/C++ backend) or dynamically generated ones (using the LLVM backend) can be integrated, and how the clang/LLVM-based Projucer compilation chain could possibly be used. Several recent developments, such as MIDI, OSC and polyphonic architectures, will be demonstrated.

Thibaut Carpentier
Challenges with massively multichannel audio applications and music productions

There is today a growing interest in sound spatialization for multimedia creation: real-time 3-D audio rendering in VR environments, 3-D sound systems in movie theaters, binaural radio broadcasts, multichannel sound installations, etc. However, the current state of the art of software tools reveals that most applications are ill-suited to the needs of multichannel audio production. In particular, most DAWs lack flexibility for high-channel-count audio processing, and inserting and routing spatialization plugins is difficult or tedious.

In this talk we will present a viable workflow for authoring and rendering massively multichannel spatialization. This includes a virtual mixing desk that allows the user to comprehensively mix, reverberate and spatialize sound materials. Developed with JUCE, this environment supports a wide range of 3-D spatialization techniques (Ambisonics, binaural, etc.) and offers a flexible routing architecture. Additionally, the application can be controlled via an OSC plugin (also JUCE-based) that transmits automation parameters between a DAW and the remote renderer.
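
As one small, concrete example of the signal processing involved, encoding a mono source into first-order Ambisonics B-format is just four direction-dependent gains (conventions and normalisation vary between tools; this sketch uses the traditional FuMa weighting):

    #include <cmath>

    // Encode sample s at azimuth az / elevation el (radians) into W, X, Y, Z.
    void encodeFirstOrder (float s, float az, float el,
                           float& w, float& x, float& y, float& z)
    {
        w = s * 0.7071f;                         // omnidirectional component
        x = s * std::cos (az) * std::cos (el);   // front-back
        y = s * std::sin (az) * std::cos (el);   // left-right
        z = s * std::sin (el);                   // up-down
    }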

This presentation will describe the current challenges with multichannel audio, detail the signal processing and software architecture of the proposed workstation, demonstrate some examples of realisations, and discuss prospective improvements for spatialization tools.

Tim Blechmann
High-Performance Audio Programming on Modern Out-of-Order CPUs

Most CPUs these days have a pipelined front-end with an out-of-order back-end. In order to achieve the best performance, developers can utilise certain SIMD and low-level programming techniques.

This talk gives a rough overview of the architecture of modern Intel CPUs, explaining the pipeline, µop scheduler and execution units, with a focus on the implications for high-performance audio programming. We then discuss aspects such as pipeline stalls, data hazards and data dependencies, and show some real-world code examples explaining how to avoid them.
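
One example of the kind of data dependency in question: in a running sum, every addition must wait for the previous one. Splitting the work across independent accumulators lets the out-of-order core keep several additions in flight (a sketch; the actual gain depends on the CPU):

    float sum (const float* x, int n)
    {
        // Four independent dependency chains instead of one serial chain.
        float a0 = 0, a1 = 0, a2 = 0, a3 = 0;
        int i = 0;
        for (; i + 4 <= n; i += 4)
        {
            a0 += x[i];
            a1 += x[i + 1];
            a2 += x[i + 2];
            a3 += x[i + 3];
        }
        for (; i < n; ++i)    // remainder
            a0 += x[i];
        return (a0 + a1) + (a2 + a3);
    }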

Timur Doumler
C++ In the Audio Industry, Episode III: The Lock-Free Queue

The lock-free queue (a.k.a. the lock-free FIFO) is arguably one of the most important data structures in audio programming. It is universally used to synchronise the real-time audio processing thread with incoming data (such as MIDI) and outgoing data (such as visualiser updates) on other threads. For a professional C++ audio programmer, knowing the lock-free queue is as important as knowing std::vector.

There are several popular implementations around, such as Boost.Lockfree and the JUCE AbstractFifo. However, many developers use these classes without actually knowing how exactly they work, why we use them, how we implement them, and what the typical use cases and considerations are. This talk will remedy that situation.
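
For orientation, here is the skeleton of a single-producer, single-consumer queue of the kind the talk dissects (a sketch; getting the memory ordering and edge cases right is precisely what the talk is about):

    #include <array>
    #include <atomic>
    #include <cstddef>

    template <typename T, size_t Size>     // Size must be a power of two
    class SpscFifo
    {
    public:
        bool push (const T& item)          // producer thread only
        {
            const auto w    = writePos.load (std::memory_order_relaxed);
            const auto next = (w + 1) & (Size - 1);
            if (next == readPos.load (std::memory_order_acquire))
                return false;              // full
            buffer[w] = item;
            writePos.store (next, std::memory_order_release);
            return true;
        }

        bool pop (T& item)                 // consumer thread only
        {
            const auto r = readPos.load (std::memory_order_relaxed);
            if (r == writePos.load (std::memory_order_acquire))
                return false;              // empty
            item = buffer[r];
            readPos.store ((r + 1) & (Size - 1), std::memory_order_release);
            return true;
        }

    private:
        std::array<T, Size> buffer;
        std::atomic<size_t> writePos { 0 }, readPos { 0 };
    };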

Vlad Voina
Build Modern GUIs Fast with Projucer

Writing complex, modern GUI applications in C++ is not always a graceful experience. The time between making a code change and seeing the result is too long, and is very often filled with software engineering problems that really should not occur in GUI development.

Projucer's Live Building is a great tool that, combined with an appropriate software design strategy, can dramatically improve your GUI development workflow. This session will demonstrate a way of writing Projuceable classes and offer insight into software architecture and efficient GUI development.