Closing remarks by Molt Honorable Sr. Vicent Soler i Marco, Regional Minister of Finance and Economic Model of the Generalitat Valenciana.
They might not be delivering our mail (or our burritos; see tacocopter.com) yet, but drones are now simple, small, and affordable enough to be considered toys. You can even customize and program some of them! The Parrot AR Drone has an API that lets you control not only the drone's movement but also stream video and images from both of its cameras. I'll show you how you can use Python and node.js to build a drone that moves all by itself.
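Under the hood, the AR Drone is steered by plain-text "AT" commands sent over UDP, which is what libraries like node's `ar-drone` wrap for you. As a minimal sketch (bit values as I recall them from the AR.Drone developer guide; verify against the SDK before flying anything), here is how a takeoff command can be built and sent in Python:

```python
import socket

DRONE_IP = "192.168.1.1"   # the drone's default address on its own Wi-Fi network
AT_PORT = 5556             # UDP port the drone listens on for AT commands

# AT*REF argument bits (per the AR.Drone developer guide): bits 18, 20,
# 22, 24 and 28 are always set; setting bit 9 requests takeoff,
# clearing it requests landing.
REF_BASE = (1 << 18) | (1 << 20) | (1 << 22) | (1 << 24) | (1 << 28)

def at_ref(seq, takeoff):
    """Build an AT*REF command string for takeoff (True) or landing (False)."""
    value = (REF_BASE | (1 << 9)) if takeoff else REF_BASE
    return "AT*REF={},{}\r".format(seq, value)

def send_command(command):
    """Fire a command at the drone over UDP (only works on the drone's network)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(command.encode("ascii"), (DRONE_IP, AT_PORT))
    sock.close()
```

Calling `send_command(at_ref(1, takeoff=True))` while connected to the drone's Wi-Fi would ask it to take off; in practice you would use an existing client library rather than hand-rolling the protocol.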
Greg Lamp is the co-founder and CTO of Yhat. In this role, Greg leads development of Yhat's core products and infrastructure and is the principal architect of the company's cloud and on-premise enterprise software applications. Greg was previously a product manager at OnDeck, a fintech startup in New York, and before that an analyst at comScore. Greg is a graduate of the University of Virginia.
Training deep networks is a time-consuming process, with networks for object recognition often requiring multiple days to train. For this reason, leveraging the resources of a cluster to speed up training is an important area of work. In this talk we'll show how to use an AWS Spark cluster to train a model quickly from a laptop at very little cost (around 10€).
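The core pattern behind this kind of cluster speed-up is synchronous data-parallel training: each worker computes gradients on its own shard of the data, and the driver averages them before updating the shared model. A toy, hedged illustration in pure Python (no actual Spark; the shard list stands in for an RDD's partitions, and a linear model with true weight 2.0 stands in for a deep network):

```python
def shard_gradient(w, shard):
    """Mean gradient of the squared error for y = w*x on one data shard."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def train(shards, w=0.0, lr=0.01, epochs=100):
    for _ in range(epochs):
        # On a Spark cluster, this map-then-average step would run as
        # something like rdd.map(...).reduce(...) across the executors.
        grads = [shard_gradient(w, s) for s in shards]
        w -= lr * sum(grads) / len(grads)
    return w

data = [(x, 2.0 * x) for x in range(1, 9)]   # generated with true weight 2.0
shards = [data[0:4], data[4:8]]               # two simulated "workers"
w = train(shards)                             # converges toward 2.0
```

The same averaging step is what a real distributed training job parallelizes; the win comes from each worker touching only its own fraction of the data per epoch.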
Vincent Van Steenbergen is an R&D and backend engineer at IDAaaS (Intelligent Data Analysis as a Service), a research and development spin-off from the University Paris 13. IDAaaS develops services in intelligent data analysis such as data mining, knowledge discovery in databases, and predictive analytics. Vincent has been developing all kinds of software for as long as he can remember.
Deep Learning (DL) is sweeping through the Machine Learning community like a tsunami. This talk aims at introducing DL, its motivation, and its main techniques. However, part of this talk is also devoted to demystifying DL. What are its main advantages, but also its main drawbacks? And what are the key issues that practitioners have to consider?
Roberto Paredes is an Associate Professor at the Departamento de Sistemas Informáticos y Computación (DSIC) of the Universidad Politécnica de Valencia (UPV). He belongs to the Pattern Recognition and Human Language Technologies (PRHLT) Research Centre. Roberto Paredes is the Director of the PRHLT and the President of the Spanish AERFAI Association. His main research interests are statistical learning, machine learning, and more recently neural networks and deep learning.
In a short time, Trovit became one of the leaders in the online classified advertising industry. We adopted Hadoop and MapReduce in order to manage all our content in a scalable way. However, we ran up against their limitations: that's why we looked at Spark. As of early 2016 we have adopted it for good, and it is constantly bringing fresh solutions to our business. The talk will consist of an introduction to Trovit and its Big Data infrastructure, and we will specifically illustrate how Spark works with a demo.
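The MapReduce model mentioned above boils down to three stages: map each input record to key-value pairs, shuffle the pairs so that equal keys land together, and reduce each group to a result. A small, hedged pure-Python sketch of the classic word-count job (the framework normally runs these stages across many machines; here they run in-process):

```python
from itertools import groupby

def map_phase(docs):
    # Map: emit a (word, 1) pair for every word in every document.
    return [(word, 1) for doc in docs for word in doc.split()]

def shuffle(pairs):
    # Shuffle: group intermediate pairs by key, as the framework does
    # between the map and reduce stages.
    for key, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield key, [count for _, count in group]

def reduce_phase(grouped):
    # Reduce: sum the counts for each word.
    return {word: sum(counts) for word, counts in grouped}

docs = ["flat for rent", "car for sale", "flat for sale"]
counts = reduce_phase(shuffle(map_phase(docs)))
```

In Spark the whole pipeline collapses to roughly `rdd.flatMap(tokenize).map(lambda w: (w, 1)).reduceByKey(add)`, with intermediate results kept in memory rather than written to disk between stages, which is one of the limitations of classic MapReduce that pushed us toward Spark.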
Ferran Galí i Reniu is passionate about web-scale distributed systems. Having worked on Big Data technologies for several years, he has gained expertise solving problems that require massive amounts of data processing. Architecting the deployment of Hadoop on a cluster of machines, developing new solutions, or playing data scientist to keep the business thriving are some of the day-to-day tasks he has to deal with. Right now he is working at Trovit, building the best search engine for classified ads.
ML services are quickly becoming a commodity, and they will be taken for granted by developers and computer users alike in the near future. The building blocks for ML as a ubiquitous service are already in place, almost always in the form of remote APIs that provide a first level of abstraction over ML problem-solving and, especially, obviate scalability and resource allocation issues. But that's not enough: those building blocks still leak implementation details inessential to the application developer who needs to provide domain-specific solutions. We need to ascend a couple of rungs in the abstraction ladder and provide domain-specific languages to describe ML solutions without nitty-gritty details unrelated to the problem at hand, offering non-experts the possibility of automating their ML solutions. In this talk, we'll discuss our experience designing and developing BigML's data wrangling and ML workflow DSLs, Flatline and WhizzML, and how they generalize to similar ML services and APIs.
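To make the "ascend the abstraction ladder" idea concrete, here is a deliberately hypothetical Python sketch (not Flatline or WhizzML syntax, and not BigML's API) of what a workflow abstraction buys you: the user declares a chain of resource transformations (source to dataset to model), and the plumbing between steps disappears.

```python
from functools import reduce

def pipeline(*steps):
    """Compose steps into one workflow: each step's output feeds the next,
    the way an ML workflow DSL chains resource creation calls."""
    return lambda resource: reduce(lambda r, step: step(r), steps, resource)

# Hypothetical stand-ins for remote API calls.
def make_dataset(source):
    return {"rows": source["data"]}

def make_model(dataset):
    # "Train" by memorising the majority label; a placeholder for the
    # real model-building call a service would perform remotely.
    labels = [row[-1] for row in dataset["rows"]]
    return {"majority": max(set(labels), key=labels.count)}

workflow = pipeline(make_dataset, make_model)
model = workflow({"data": [(1, "yes"), (2, "no"), (3, "yes")]})
```

The point is the shape, not the toy model: once workflows are values that can be composed and reused, non-experts can automate whole ML solutions instead of scripting individual API calls.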
Jose A. Ortega Ruiz is part of the founding team of BigML, a little startup trying to apply machine learning and other AI techniques to big data and make them accessible to non-specialists. He was hacking for Oblong from 2008 to early 2011. Before that, he worked for Google (from July 2007). From June 2005 to May 2007, he worked on embedded software development for the scientific payload of LISA Pathfinder. He was a theoretical physicist in a previous life and wrote a Ph.D. thesis on gravitational wave detectors. He also holds a bachelor's degree in computer science. Between 2003 and 2005, he taught courses on programming and computer networks at the Universitat Autònoma de Barcelona, where he was part of the mobile agents research group.
Beginners in machine learning usually presume that a proper assessment of a predictive model simply requires following the golden rule of evaluation (split the data into training and test sets) in order to choose the most accurate model, which will hopefully behave well when deployed into production. In the real world, however, things are more complicated. The contexts in which a predictive model is evaluated and deployed can differ significantly, and models do not cope well with this change, especially if they have been evaluated with a performance metric that is insensitive to changing contexts. A more comprehensive and reliable view of machine learning evaluation is illustrated with several common pitfalls and tips for addressing them, such as the use of probabilistic models, calibration techniques, imbalanced costs, and visualisation tools such as ROC analysis.
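ROC analysis, mentioned above, is one concrete way to evaluate a model independently of any single decision threshold: the area under the ROC curve (AUC) equals the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A minimal pure-Python sketch of that rank-based formulation (equivalent to the Mann-Whitney U statistic, with ties counted as one half):

```python
def roc_auc(scores, labels):
    """AUC as the probability that a random positive outranks a random
    negative, given model scores and binary labels (1 = positive)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Count pairwise "wins" of positives over negatives; ties score 0.5.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A perfectly ranked set of scores: both positives outrank both negatives.
auc = roc_auc([0.9, 0.8, 0.4, 0.3], [1, 1, 0, 0])
```

Because AUC only looks at the ranking of scores, it is insensitive to the operating threshold and to class-imbalance effects that can make plain accuracy misleading, which is exactly the kind of changing-context pitfall the talk addresses.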
Jose Hernandez Orallo, Ph.D., is a senior lecturer at the Universitat Politècnica de València. His research areas include data mining and machine learning, model reframing, inductive programming, and intelligence measurement and artificial general intelligence.