UPF-DTIC Research Seminar Abstracts, 2015-2016



Clarence Barlow

Clarence Barlow, UC Santa Barbara
Tuesday, June 14, 2016, 3:30 pm, 55.410

Host: Xavier Serra (MTG)

Title: On Synthrumentation - the Spectral Analysis of Speech for Subsequent Resynthesis by Acoustic Instruments

Abstract:
‘Synthrumentation’ is a technique for the resynthesis of speech with acoustic instruments, developed by the composer Clarence Barlow in the early 1980s. Over the past decade instrumental speech synthesis has been thematised by a diverse range of composers (e.g. Peter Ablinger or Jonathan Harvey); however, Barlow’s work is rarely accorded the credit it deserves for the pioneering role it played in this field. This talk seeks to explain the basic mechanics of the synthrumentation technique and to demonstrate its practical application through an analysis of Barlow’s ensemble piece Im Januar am Nil, composed between 1981 and 1984. It should become apparent that Barlow never uses synthrumentation in its conceptually pure form; rather, its realisation is always integrated into an overarching musical context, which reflects Barlow’s general approach to musical invention, allowing different factors to interact.

[top]



Rosa Montañà, TDX and PhD thesis coordinator, UPF library
Thursday, June 9, 2016, 3:30pm, Room 52.S29

Title: Copyright in PhD thesis reports.

Abstract:
This talk will address issues that may arise when publishing a thesis online in the TDX repository, how to include published articles, what to do in the case of patents and technology transfer, and other related matters. We will cover the following topics:

  • Publication of the thesis in a repository: TDX and RD 99/2011.
  • Declaration by the author.
  • Copyright vs Creative Commons licences.
  • Publication embargo: delayed publication.
  • Theses under knowledge protection or technology transfer.
  • Published articles in the thesis: dissertation by papers and other copyright issues.
  • An approach to research data in theses.

[top]


Andreas Kaltenbrunner

Andreas Kaltenbrunner, EURECAT
Thursday, May 26, 2016, 3:30pm, 55.410

Host: Vicenç Gómez (AI)

Title: Studying Societal Debates through Wikipedia

Abstract:
Wikipedia is a huge global repository of human knowledge, and at the same time one of the largest experiments of online collaboration. Its articles, their links and the negotiations around their content tend to reflect societal debates in different language communities. I will present a series of works dealing with this topic.

First I will focus on article talk pages and analyse the structure and growth process of discussions around Wikipedia articles to unveil patterns of collective participation and content creation along the temporal dimension.

Then, I will present a large-scale analysis of the emotional expressions and communication style of editors in these discussions, focusing on how emotion and dialogue differ depending on the editors' status, gender, and communication network.

Finally, I will present Contropedia, a platform that adds a layer of transparency to Wikipedia articles. Combining activity from the edit history and discussion in talk pages, the platform uses wiki links as focal points to explore the development of controversial issues over time.

Biography:
Andreas Kaltenbrunner is head of the Digital Humanities Department at Eurecat. He obtained his PhD in Computer Science and Digital Communication from Universitat Pompeu Fabra in 2008, with a thesis on social media, and holds a Master’s degree in Mathematics and Informatics from the Vienna University of Technology, obtained in 2000. His research focuses on social media and social network analysis, areas in which he has authored about 50 publications. Andreas uses methods from computer science and the study of complex systems to address sociological research questions.

[top]


Arianne Bercowsky

Arianne Bercowsky, Biomedical Engineering Student at UPF
Wednesday, May 18, 2016, 4:30 pm, room 55.410

Host: Oscar Camara (PhySense)

Title: Colorectal Cancer preventive treatment approach

Abstract:
Last year the WHO announced that the consumption of processed meat could increase the probability of developing cancer. Polyamines, a group of molecules essential for cell development, were found to be responsible for this carcinogenic activity.

We aim to develop a preventive colorectal cancer treatment consisting of a probiotic that reduces the amount of polyamines our body absorbs, thus keeping their concentrations at a healthy level. This probiotic will consist of a mix of different engineered bacteria from the human gut microbiota that will eliminate the excess polyamines. We will also create a marker that can test cancer risk using bacteria able to detect high concentrations of polyamines in urine; in that case, a reporter gene will warn patients of their cancer likelihood.

[top]


Malcolm Bain

Malcolm Bain, id Law Partners
Thursday, May 19, 2016, 3:30pm, 55.410

Host: Emilia Gomez

Title: Software licensing – from basic to advanced licensing and business models

Abstract:
In nearly any area of R+D you deal with software, either as tools you use to do the research or as software created as a result. This means you need to take software licensing issues, and their appropriate management, into account. This seminar looks at the basics of software licensing and the different licensing models from proprietary to open source, and links these models with business models (for the exploitation of R+D) for a fuller understanding of how law and licensing impact project management and exploitation.

Biography:
Malcolm Bain is an English solicitor and Spanish lawyer, specialising in Information Technology and Intellectual Property law, and co-founder of id law partners (now part of BGMA, a Barcelona based law firm). He has a wide experience representing clients on both sides of IT transactions, and advises on licensing, software contracts, technology transfer, copyright, privacy and trademark issues. He has participated in various R+D projects and written and lectured on many aspects of IT law, e-commerce and internet regulation.

[top]


Aurelio Ruiz

Aurelio Ruiz, UPF-DTIC
Thursday, May 12, 2016, 3:30pm, 55.410

Title: Reproducibility in research

Abstract:
An objective of the María de Maeztu Strategic Research Program is “to increase the impact of our research by increasing the impact of the publications, datasets and software tools, and take advantage of this impact to establish and consolidate partnerships”. It includes actions to ensure that research results, datasets and tools are discoverable, interpretable and reusable, including publishing data and software together with the publications. During this session we will discuss some of the topics linked to "reproducible research", including the increasing requirements from funding agencies and publishers to make datasets and computer code available, and the potential mechanisms to promote this in our organisation that are being elaborated in the context of the María de Maeztu program.


Biography:
Aurelio Ruiz has a Telecommunications Engineering degree (Universidad Carlos III de Madrid) and a Master in Science Management and Leadership (IDEC, UPF). After research and educational traineeships at the Technical University of Munich, EPFL and CERN, he was responsible for project management in the banking and aeronautical sectors, with projects in Europe, Asia and Africa. Since 2006 he has been with UPF, currently working on the management of the María de Maeztu strategic research program.

[top]


Miguel Ballesteros

Miguel Ballesteros, UPF-DTIC
Tuesday, May 10, 2016, 12:30 pm, Room 52.121

Group: TALN

Title: Deep Learning for Transition-based Natural Language Processing.

Abstract:
This talk describes new, sequential and efficient algorithms for the analysis of text data. Transition-based models use a transition system, or abstract state machine, to model structured prediction problems, for example syntactic dependency parsing, as a sequence of actions. We propose a technique for learning representations of these states. Our primary innovation is a new control structure for sequence-to-sequence neural networks: the stack LSTM. Like the conventional stack data structures used in transition-based parsing, elements can be pushed to or popped from the top of the stack in constant time but, in addition, an LSTM maintains a continuous-space embedding of the stack contents. This lets us formulate efficient transition-based natural language processing models that capture three facets of the state: (i) unbounded look-ahead into the buffer of incoming words, (ii) the complete history of transition actions taken, and (iii) the complete contents of the stack of partially built fragments, including their internal structures.

We also discuss different approaches to word representations, by modeling words and by modeling characters. The latter improves the handling of out-of-vocabulary words without using external resources and improves performance on morphologically rich languages. Transition-based modeling for natural language processing is not limited to syntactic parsing: in this talk I will explain how we successfully applied stack LSTMs to dependency parsing, phrase-structure parsing, language modeling and named entity recognition, with outstanding results.
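The key property of the stack LSTM can be illustrated with a toy sketch (this is not the speakers' implementation; a single scalar tanh unit with made-up weights `w_in` and `w_rec` stands in for a full LSTM cell): push and pop run in constant time, and each stack entry carries a recurrent summary of everything below it, so popping reverts the embedding for free.

```python
import math

class StackRNN:
    """Toy stack that maintains a continuous summary of its contents,
    mimicking the constant-time push/pop of the stack LSTM."""

    def __init__(self, w_in=0.5, w_rec=0.8):
        self.w_in = w_in    # input weight (toy stand-in for an LSTM's W)
        self.w_rec = w_rec  # recurrent weight (toy stand-in for U)
        # Each entry is (element, hidden state summarising the stack up to it).
        self.stack = [(None, 0.0)]  # sentinel: empty stack, zero state

    def push(self, x):
        _, h_prev = self.stack[-1]
        h = math.tanh(self.w_in * x + self.w_rec * h_prev)
        self.stack.append((x, h))

    def pop(self):
        x, _ = self.stack.pop()
        return x  # the summary reverts to the state below, at no cost

    def summary(self):
        return self.stack[-1][1]  # embedding of the current stack contents
```

Because a push followed by a pop restores the earlier summary exactly, a transition-based parser can consult the embedding after any sequence of actions.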

Biography:
Miguel is a postdoctoral researcher at Pompeu Fabra University. He was a visiting postdoc/lecturer at Carnegie Mellon University (USA). Previously, he completed his PhD, Master's and undergraduate degrees at the Complutense University of Madrid. Miguel works at the intersection of natural language processing and machine learning, with special interests in syntactic analysis, dependency parsing and phrase-structure parsing. He also explores other natural language processing problems, including semantic parsing, named entity recognition, coreference resolution and language modeling. Moreover, he has an interest in accessibility, with a strong belief that computational linguistics can have a significant impact on improving society.


[top]


Abelardo Pardo

Abelardo Pardo, University of Sydney
Thursday, May 5, 2016, 3:30pm, 55.410

Host: Davinia Hernandez-Leo (GTI)

Title: Feedback at scale with a little help from my algorithms

Abstract:
Formative feedback has been identified as one of the factors with the largest potential to improve educational experiences. However, providing adequate feedback in the right form, at the right time, at the right level is not only challenging, but also risky. Academics in higher education institutions are increasingly under pressure to solve the tension between larger student cohorts in active learning scenarios and the quality of feedback given to students. Can technology help in this area? The increasing amount of tasks that are mediated by technology offers the possibility to obtain a detailed digital footprint of the students. Areas such as intelligent tutoring systems or artificial intelligence in education propose the creation of expert systems that usually support students in a completely automated setting. Can data be used to amplify rather than replace the role of an instructor? Can data support higher order strategic decisions for the students? In this talk we will explore some ideas about how to combine educational technology, data collection and prediction algorithms with current tasks carried out by instructors to amplify their effect in active learning scenarios.

Biography:
Dr. Abelardo Pardo is Associate Head of Teaching and Learning and Senior Lecturer at the School of Electrical and Information Engineering, The University of Sydney. He holds a PhD in Computer Science from the University of Colorado at Boulder. He is the director of the Learning and Affect Technologies Engineering (LATTE) laboratory, specialized in educational technology. His areas of research are learning analytics, software for collaborative and personalized learning, and technology to improve the student experience and teaching practice. He is also a research fellow at the LINK Research Lab (The University of Texas at Arlington), manager of the Engineering and Technology Program at the STEM Teacher Enrichment Academy (The University of Sydney), and a member of the executive board of the Society for Learning Analytics Research (SoLAR).

[top]


Roberto Martinez-Maldonado

Roberto Martinez-Maldonado, University of Technology, Sydney (UTS)
Wednesday, May 4, 2016, 3:30 pm, 55.410

Host: Davinia Hernandez-Leo (GTI)

Title: Multi-modal sequence mining and analytics of face-to-face collaborative learning

Abstract:
Learning to collaborate is important. But how does one learn to collaborate face-to-face? What actions and strategies should a group of students follow when they start a task? This talk will introduce our current work on analysing aspects of students’ activity when working in digital ecologies enriched with sensors that identify users, and in multi-display settings. This strand of research seeks to automatically distinguish, discover and distil salient common patterns of interaction within groups by mining the logs of students’ actions, detected speech, changes in the group’s artefacts, etc. The talk will showcase a number of group situations where multiple people are engaged in creative tasks that require design thinking and sense making. Multiple data mining techniques have been used in these scenarios to generate understanding of collaborative group processes, including classification, sequence pattern mining, process mining and clustering techniques.

Biography:
Roberto Martinez-Maldonado is a postdoctoral research associate in the Connected Intelligence Centre (CIC) at the University of Technology, Sydney (UTS), Australia, working with Prof. Simon Buckingham Shum. He obtained his doctorate degree in 2014 from the Computer Human Adapted Interaction Research Group (CHAI) at The University of Sydney, Australia. His areas of expertise include Human Computer Interaction, Learning Analytics, Artificial Intelligence and Education. In the past 5 years, his research has been focused on applying data mining techniques to help understand how people learn and collaborate, empowering people with emerging technologies, combining available technologies for capturing traces of collaboration and helping teachers to orchestrate their classroom through the use of interactive surfaces.

[top]


Simone Tassani

Simone Tassani, UPF-DTIC
Thursday, April 28, 2016, 3-5 pm (2 hours), Room 55.410

Group: SIMBIOSYS

Title: Session 3 of 3: Statistical Analysis and Design of Experiments

Abstract:
The third lesson will present Latin and Graeco-Latin squares as powerful tools for experiment reduction. The concept of orthonormality for the decomposition of effects will be introduced. The same decomposition will later be used for the 2^k design and its generalization to three or more factors. Finally, Yates' algorithm will be presented for the systematic calculation of the effects.
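As a preview of the last topic, Yates' algorithm computes all effect estimates of a 2^k factorial with k passes of pairwise sums and differences. A minimal sketch (the function name and the example responses are illustrative, not from the course material):

```python
def yates_effects(responses):
    """Effect estimates for a 2^k factorial via Yates' algorithm.

    `responses` are the 2^k treatment responses in standard (Yates)
    order, e.g. for k=2: (1), a, b, ab. Each of the k passes replaces
    the column with pairwise sums followed by pairwise differences.
    """
    n = len(responses)
    k = n.bit_length() - 1
    assert n == 2 ** k, "length must be a power of two"
    col = list(responses)
    for _ in range(k):
        sums = [col[i] + col[i + 1] for i in range(0, n, 2)]
        diffs = [col[i + 1] - col[i] for i in range(0, n, 2)]
        col = sums + diffs
    # The first entry is the grand total; the rest become effect
    # estimates when divided by 2^(k-1), half the number of runs.
    mean = col[0] / n
    effects = [c / (n / 2) for c in col[1:]]
    return mean, effects
```

For responses (1)=10, a=14, b=12, ab=18 this gives mean 13.5 and effects A=5, B=3, AB=1, matching the direct computation of the A effect: (14+18)/2 - (10+12)/2 = 5.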

[top]


Simone Tassani

Simone Tassani, UPF-DTIC
Thursday, April 21, 2016, 3-5 pm (2 hours), Room 55.410

Group: SIMBIOSYS

Title: Session 2 of 3: Statistical Analysis and Design of Experiments

Abstract:
The second lesson will introduce multiple comparisons and the different kinds of statistical error that must be taken into consideration in order to perform a correct analysis (i.e. Type I and Type II errors and multiple-comparison errors). Two-factor designs will also be discussed, and the study of interaction among the factors will be defined. Finally, it will be shown how to perform an analysis in the absence of repetitions.
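The multiple-comparison problem mentioned above has a classic, if conservative, remedy: the Bonferroni correction, which runs each of m tests at level alpha/m so the family-wise Type I error stays below alpha. A minimal sketch (the course may cover other corrections; this function is illustrative):

```python
def bonferroni(p_values, alpha=0.05):
    """Return which hypotheses are rejected after Bonferroni correction.

    Testing m hypotheses each at level alpha inflates the family-wise
    Type I error to roughly 1 - (1 - alpha)^m, so each individual test
    is instead run at the stricter level alpha / m.
    """
    m = len(p_values)
    return [p < alpha / m for p in p_values]
```

With p-values 0.01, 0.04 and 0.30 at alpha = 0.05, only 0.01 survives the corrected threshold 0.05/3 ≈ 0.0167, even though 0.04 would pass an uncorrected test.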

[top]


Alfonso Martinez

Alfonso Martinez, UPF-DTIC
Tuesday, April 19, 2016, 12:30 pm, room 55.410

Group: ITC

Title: Some Results on Mismatched Decoding in Information Theory

Abstract:
Over the past decades, through the sister concepts of entropy and channel capacity, which respectively characterize the best solutions to the dual problems of source and channel coding, information and communication theory have guided engineers and computer scientists in the design and implementation of ever more efficient communication systems. Notwithstanding this success, the recent trend towards communication over short temporal durations has undermined two underlying assumptions of the prevailing theoretical analysis. First, perfect knowledge of the stochastic nature of the channel may be difficult to acquire, a fact which renders the use of optimum decoding rules impossible and makes it necessary to adopt a mismatched decoding perspective. Secondly, efficient mathematical tools valid for very long transmission durations, such as the asymptotic analysis implicit in large-deviation theory, are difficult to justify and ought to be replaced by new methods valid for arbitrary transmission lengths.

In my four years as a Ramón y Cajal Fellow, my research has focused on developing new methods and tools that address these challenges in mismatched decoding at arbitrary, i.e. finite, transmission lengths. More specifically, and in contrast to the focus on achievable rates prevalent in most existing work on mismatched decoding, my work has focused on novel methods for analysing the random-coding error probability as a benchmark of the best possible performance of channel codes in this context. The results of this research have been presented in 26 conference papers, 7 of which were invited, and in 6 published and 3 submitted journal papers, all in the IEEE Transactions on Information Theory (impact factor in 2013: 2.65). In this talk, I will review the main outcomes of this work and suggest some future research lines that build on it.

Biography:
Since November 2011 I have been with the Department of Information and Communication Technologies, Universitat Pompeu Fabra, in Barcelona, where I am currently a Ramón y Cajal Research Fellow. I obtained my M. Sc. degree in Electrical Engineering in 1997 from the University of Zaragoza, in Spain. In the period from 1998 to 2003 I was with the research centre of the European Space Agency (ESA-ESTEC) in Noordwijk, The Netherlands, working as a systems engineer. Our work on APSK modulation was instrumental in the definition of the physical layer of DVB-S2. From 2003 to 2007 I was a Research and Teaching Assistant at the Signal Processing Systems group of the Technische Universiteit Eindhoven. My research focused on optical communications, and more specifically on the links between classical and quantum information theory. In my Ph. D. thesis, entitled "Information-theoretic analysis of a family of additive energy channels", I put forward and studied the additive energy channels, a new family of channel models for non-coherent communications. In the same period, I coauthored a monograph on "Bit-Interleaved Coded Modulation", a widely used technique that matches the simplicity of binary coding with the efficiency of non-binary modulation. In the years 2008-2010 I was a post-doctoral fellow with the Information-theoretic Learning Group at CWI, in Amsterdam. In 2011 I was a Research Associate with the Signal Processing and Communications Lab at the Department of Engineering, University of Cambridge.

[top]


Simone Tassani

Simone Tassani, UPF-DTIC
Thursday, April 14, 2016, 3-5 pm (2 hours), Room 52.S29

Group: SIMBIOSYS

Title: Session 1 of 3: Statistical Analysis and Design of Experiments

Abstract:
The first lesson will present some basics of statistics. The concepts of normal distribution, average, standard deviation and variance will be introduced, together with the general aim of a design of experiments. The class will proceed with the design of a single-factor experiment, the introduction of the statistical model, and a description of how to perform an Analysis of Variance (ANOVA).
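The one-way ANOVA of the lesson reduces to a short computation: partition the total variability into between-group and within-group sums of squares and compare their mean squares. A self-contained sketch (the function and example data are illustrative, not from the course):

```python
def one_way_anova(groups):
    """F statistic for a single-factor (one-way) ANOVA."""
    n = sum(len(g) for g in groups)          # total observations
    k = len(groups)                          # number of factor levels
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: variability due to the treatment.
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: residual (error) variability.
    ssw = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    msb = ssb / (k - 1)                      # mean square between, df = k - 1
    msw = ssw / (n - k)                      # mean square within,  df = n - k
    return msb / msw                         # F = MSB / MSW
```

For groups [1, 2, 3], [2, 3, 4], [3, 4, 5]: SSB = 6 on 2 degrees of freedom and SSW = 6 on 6, so F = 3/1 = 3.0; the statistic is then compared against an F(2, 6) distribution.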

[top]


Kadi Bouatouch

Christian Bouville

Kadi Bouatouch and Christian Bouville, INRIA/IRISA Rennes, France
Monday, April 11, 2016, 11:00 am, 55.410

Host: Josep Blat (GTI)

Title: Toward More and More Realism in Computer Graphics

Abstract:
Photorealistic rendering is a computer graphics discipline which aims at synthesizing images of virtual scenes which look as close as possible to what a real photo of that virtual scene would be. This is achieved by a physically-based simulation of light propagation and of light interaction with the objects composing the virtual scene. In this seminar, Kadi Bouatouch and Christian Bouville will go through the most important stages of photorealistic rendering, from material modeling to light propagation simulation algorithms, with a particular focus on Quasi-Monte Carlo spherical integration.

Biography:
Kadi Bouatouch graduated as an electronics and automatic systems engineer (ENSEM, 1974). He was awarded a PhD in 1977 and a higher doctorate in computer science, in the field of computer graphics, in 1989. He works on global illumination, lighting simulation for complex environments, GPU-based rendering and computer vision. He is currently Professor at the University of Rennes 1 (France) and a researcher at IRISA (Institut de Recherche en Informatique et Systèmes Aléatoires).

Christian Bouville is presently an invited researcher in the FRVSense team at IRISA in Rennes (France). He was a team leader, project leader and Emeritus Expert at Orange Labs until 2006 and has been involved in many European and national projects. His main fields of research are now global illumination models and image-based rendering, with a special interest in machine learning approaches.

[top]


Carla Ràfols

Carla Ràfols, UPF-DTIC
Thursday, March 10, 2016, 3:30pm, 55.410

Group: WiCom

Title: Cryptography for Non-Cryptographers

Abstract:
The need for secret communication is immemorial. For instance, we still have records of some of the encryption algorithms which were used by the Greeks or the Romans to communicate secretly.

Since these times we have naturally seen a dramatic evolution of encryption algorithms, which must be secure against very powerful computers. But beyond this, the advent of digital technologies has created the need to realize other tasks securely, for instance, to sign documents digitally. Nowadays, we have a large set of cryptographic protocols which can realize different tasks (encrypting, signing, electronic voting,...) with strong security guarantees.

In this talk I will review some of the fundamental principles guiding research in cryptographic protocols. I will illustrate the need for rigorous security definitions which are the basis of the notion of "provable security" and review some classical results and constructions. I will put a special emphasis on the notion of "zero-knowledge proofs" which allow a party to convince another party of the validity of a certain statement while hiding all additional information.
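The flavour of a zero-knowledge-style protocol can be conveyed with a toy version of Schnorr's identification scheme, in which a prover convinces a verifier that she knows the discrete logarithm x of a public key y = g^x without revealing x. This sketch uses deliberately tiny parameters and fixed nonces for readability; real deployments use large groups and hash-derived challenges:

```python
def schnorr_round(p, q, g, x, r, c):
    """One round of Schnorr's identification protocol (honest parties).

    Public: prime modulus p, prime order q of the subgroup generated
    by g, and the key y = g^x mod p. The prover knows the secret x;
    r is her random commitment nonce, c the verifier's challenge.
    """
    y = pow(g, x, p)            # public key
    t = pow(g, r, p)            # prover's commitment, sent first
    s = (r + c * x) % q         # prover's response to challenge c
    # Verifier's check: g^s == t * y^c (mod p). The transcript (t, c, s)
    # can be simulated without x, which is the zero-knowledge property.
    return pow(g, s, p) == (t * pow(y, c, p)) % p
```

With p = 23, q = 11, g = 2 (which has order 11 mod 23) and secret x = 7, the check passes for any honest choice of r and c.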

Biography:
Carla Ràfols has been a UPF Fellow at the DTIC department since October 2015. She studied Mathematics at the University of Barcelona and holds a PhD in Applied Mathematics from the Polytechnic University of Catalonia (UPC). Before coming to UPF, she was a postdoctoral researcher in the Foundations of Cryptography group at the Ruhr University Bochum, in the Horst Görtz Institute for IT Security. Her research focuses on several aspects of theoretical cryptography.

[top]


Alexandros Karatzoglou

Alexandros Karatzoglou, Telefonica Research
Thursday, March 3, 2016, 3:30pm, 55.410

Host: Gergely Neu (AI)

Title: Deep Learning

Abstract:
Deep Learning (i.e. the return of Neural Networks part deux) is one of the most active and interesting areas in Machine Learning at the moment. Alexandros will provide an introduction to basic deep learning concepts and models, going over simple feedforward networks, convolutions, autoencoders up to recurrent networks for sequential data.

[top]


Aurelio Ruiz

Aurelio Ruiz, UPF-DTIC
Thursday, February 25, 2016, 3:30pm, Room 52.S31

Title: Communication skills in science: Preparing for Research in 4 Minutes

Abstract:
UPF has launched the "Research in 4 minutes" competition, with the purpose of underscoring the importance of communication skills in the dissemination of science. During this seminar, we will discuss some approaches to maximise the possibilities to succeed in this competition, as well as in any other context with limited time to present your work to non-experts.

Suggested reading: Communication: Two minutes to impress

Biography:
Aurelio Ruiz has a Telecommunications Engineering Degree (Universidad Carlos III de Madrid) and a Master in Science Management and Leadership (IDEC, UPF). After research and educational traineeships at Technical University Munich, EPFL and CERN, he was responsible for project management in the banking and aeronautical sectors with projects in Europe, Asia and Africa. Since 2006 he is with UPF, first within the Computational Imaging and Simulation Technologies in Biomedicine Research Group, later within the UPF Research Services and, currently, as the Promotion Officer of the Department of Information and Communication Technologies and the associated Polytechnic School.

[top]


Diarmuid P. O'Donoghue

Diarmuid P. O'Donoghue, Maynooth University, Ireland
Thursday, February 11, 2016, 3:30pm, 55.410

Host: Horacio Saggion (TALN)

Title: Computational Modelling of Analogy and Blending for scientific creativity

Abstract:
Computational creativity has begun to emerge as a significant discipline, with companies like IBM and Google engaged in research in this area. I will outline some of the characteristics of computational creativity and how analogical reasoning can enrich our understanding of some aspects of creativity. I will describe the process of analogical reasoning and its use within computational creativity, particularly for scientific reasoning. I will outline some of the psychological evidence that underpins our understanding of the analogy process and how computational modelling has contributed to, and benefitted from, these findings. The role of analogy in the process of conceptual blending will also be addressed. I will reference some recent work on the Dr Inventor project.

Biography:
Diarmuid O’Donoghue received his B.Sc. and M.Sc. in Computer Science from University College Cork (UCC) and his Ph.D. from University College Dublin (UCD). He lectured briefly in UCC before joining the Department of Computer Science at Maynooth University. He has a long-standing interest in computational cognitive modelling and its use in computational creativity, especially for scientifically oriented creative reasoning. Diarmuid also received a postgraduate diploma in higher education (PGDHE) from Maynooth University. He is scientific officer for the “Dr Inventor” FP7 project with UPF and others and is also program director for the Erasmus Mundus Double MSc in Dependable Software Systems (DESEM) with U. St. Andrews (Scotland) and U. Lorraine (France).

[top]


Bob Sturm

Bob Sturm, Queen Mary University of London
Thursday, February 4, 2016, 4:30pm, 52.123

Host: Xavier Serra (MTG)

Title: The scientific evaluation of music content analysis systems: Toward valid empirical foundations for future real-world impact

Abstract:
Music content analysis research aims to meet at least three goals: 1) connect users with music and information about music; 2) help users make music and information about music; and 3) help researchers develop content analysis technologies. Standard empirical practices used in this discipline, however, have serious problems (as noted in the MIReS 2013 Roadmap, and in [2-5]). I present three case studies that exemplify these problems, and discuss them within a design-of-experiments framework. I argue that the problems with MIR evaluation cannot be satisfactorily addressed until the practice adopts the formal design of experiments [1]. I will also propose some new, still very preliminary, ways to think about what we do.

[1] R. A. Bailey, Design of comparative experiments. Cambridge University Press, 2008.
[2] G. Peeters, J. Urbano, and G. J. F. Jones, “Notes from the ISMIR 2012 late-breaking session on evaluation in music information retrieval,” in Proc. ISMIR, 2012.
[3] B. L. Sturm, “Classification accuracy is not enough: On the evaluation of music genre recognition systems,” J. Intell. Info. Systems, vol. 41, no. 3, pp. 371–406, 2013.
[4] B. L. Sturm, “The state of the art ten years after a state of the art: Future research in music information retrieval,” J. New Music Research, vol. 43, no. 2, pp. 147–172, 2014.
[5] J. Urbano, M. Schedl, and X. Serra, “Evaluation in music information retrieval,” J. Intell. Info. Systems, vol. 41, pp. 345–369, Dec. 2013.

[top]


Diemo Schwarz

Diemo Schwarz, IRCAM, Paris
Thursday, February 4, 2016, 3:30pm, Room 52.123

Host: Xavier Serra (MTG)

Title: Tangible and Embodied Interaction on Surfaces, with Mobile Phones, and with Tapioca

Abstract:
This talk will present current work of the Sound, Music, Movement Interaction team (ISMM) at Ircam about various ways to enhance interaction with digital sound synthesis or processing through the use of everyday objects and materials. The interaction is tangible and embodied and makes use of a variety of everyday gestures up to expert gestures. We will focus on three examples:

First, we show how the timbral potential of intuitive and expressive contact interaction on arbitrary surfaces can be enhanced through latency-free convolution of the microphone signal with grains from a sound corpus.

Second, using inertial sensors as found in commodity smartphones to produce sound depending on the device's motion makes it possible to leverage the creative potential and group interaction of the general public, as shown in the CoSiMa project (Collaborative Situated Media) based on the Web Audio API.

Third, with granular or liquid interaction material as in the DIRTI tangible interfaces, we forego the dogma of repeatability in favor of a richer and more complex experience, creating music with expressive gestures, molding sonic landscapes by plowing through tapioca beads.

[top]


Antonios Lioutas

Antonios Lioutas, CRG, BIOcomuniCA'T
Thursday, February 4, 2016, 12:00 pm, 52.S31

Host: Aurelio Ruiz

Title: Session 2/2: Preparing your scientific poster.

Abstract:
In the second session, we will analyse the structure of scientific posters in light of what we learned during the first session, and discuss how to improve your poster in order to create one with great impact.

Biography:
Antonios Lioutas is currently a postdoctoral researcher at the CRG. He is a biologist with a PhD in Biomedicine from UPF (CRG). Alongside his laboratory research at the CRG, he has formed a science communication agency, www.biocomunicat.com, together with four other researchers. Apart from being passionate about his main work in the lab, cancer research, he has organized numerous interactive workshops and science dissemination events.

[top]


Nicolas Schweighofer

Nicolas Schweighofer, University of Southern California
Wednesday, February 3, 2016, 4:00 pm, Room 55.410

Host: Paul Verschure (SPECS)

Title: Computational neurorehabilitation: modeling interactions between arm use and function post-stroke

Abstract:
Not available.

Biography:
Not available.

[top]


Francisco Herrera

Francisco Herrera, University of Granada
Thursday, January 28, 2016, 12:30 pm, room 52.S31

Host: Xavier Binefa (CMTECH)

Title: Big data preprocessing

Abstract:
Data preprocessing for data mining addresses one of the most important issues within the well-known Knowledge Discovery from Data process. Data taken directly from the source are usually not ready to be considered for a data mining process. Data preprocessing techniques adapt the data to fulfill the input demands of each data mining algorithm. Data preprocessing includes data preparation methods for cleaning, transformation or managing imperfect data (missing values and noisy data), and data reduction techniques (feature and instance selection and discretization) which aim at reducing the complexity of the data and at detecting or removing irrelevant and noisy elements from it.

Knowledge extraction from big data has become a very difficult task for most classical and advanced existing techniques. The main challenges are to deal with the increasing volume of data, in terms of the number of instances and/or features, and with the complexity of the problem. Designing data preprocessing methods for big data requires redesigning existing methods to adapt them to new paradigms such as MapReduce and the directed acyclic graph model used by Apache Spark.

In this talk we will focus on preprocessing approaches for big data classification, analyzing the design of preprocessing methods for the MapReduce paradigm and the Apache Spark framework.

References: see the website http://sci2s.ugr.es/BigData
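The MapReduce-style redesign the abstract mentions can be illustrated with a toy example (a minimal sketch, not code from the speaker's group; data and names are made up): a map phase emits partial sums over observed values of each feature, a reduce phase aggregates them into per-feature means, and a second pass imputes the missing values.

```python
from collections import defaultdict

# Toy records: (feature_name, value); None marks a missing value.
records = [("age", 30.0), ("age", None), ("age", 40.0),
           ("income", 1000.0), ("income", None)]

# Map phase: emit (key, (partial_sum, count)) pairs for observed values only.
def map_phase(recs):
    for key, value in recs:
        if value is not None:
            yield key, (value, 1)

# Reduce phase: aggregate partial sums into per-feature means.
def reduce_phase(pairs):
    acc = defaultdict(lambda: [0.0, 0])
    for key, (s, c) in pairs:
        acc[key][0] += s
        acc[key][1] += c
    return {k: s / c for k, (s, c) in acc.items()}

means = reduce_phase(map_phase(records))

# Second pass: impute each missing value with its feature mean.
cleaned = [(k, v if v is not None else means[k]) for k, v in records]
```

In Spark the same shape appears as a `map` followed by `reduceByKey`, with the work distributed across partitions.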

Biography:
Francisco Herrera is a Professor in the Department of Computer Science and Artificial Intelligence at the University of Granada. He has supervised 38 Ph.D. students and published more than 300 journal papers. He is co-author of the book "Data Preprocessing in Data Mining" (Springer, 2015). He currently serves as Editor-in-Chief of the international journals "Information Fusion" (Elsevier) and "Progress in Artificial Intelligence" (Springer), and is an editorial board member of a dozen journals. He has received several awards and honors for his work and publications, among others: ECCAI Fellow 2009; IFSA Fellow 2013; the 2010 Spanish National Award on Computer Science ARITMEL to the "Spanish Engineer on Computer Science"; the International Cajastur "Mamdani" Prize for Soft Computing (Fourth Edition, 2010); the IEEE Transactions on Fuzzy Systems Outstanding Paper Awards for 2008 and 2012 (bestowed in 2011 and 2015, respectively); the 2011 Lotfi A. Zadeh Prize Best Paper Award of the International Fuzzy Systems Association; the 2013 AEPIA Award for a scientific career in Artificial Intelligence (September 2013); and the 2014 XV Andalucía Research Prize Maimónides (awarded by the regional government of Andalucía). He belongs to the list of Highly Cited Researchers in the areas of Engineering and Computer Science: http://highlycited.com/ (Thomson Reuters). His h-index in Google Scholar is 105, with more than 41,000 citations.

[top]


Simón Lee

Simón Lee, Incubio
Thursday, January 28, 2016, 3:30pm, 55.410

Host: Aurelio Ruiz

Title: Incubio: The Big Data Academy

Abstract:
Incubio is a startup incubator that helps entrepreneurs develop big ideas. We specialise in early-stage projects that use Big Data to offer business processes as a service. This seminar is about the world of incubators and accelerators, and about how academics can create a spin-off, starting from an idea based on their research and building a viable product. The CEOs and CTOs of two startups (Huballin and SmartMonkey) will share their experiences with the audience and present their technological challenges in recommendation and massive computation for 2016.

Biography:
Simon Lee is a creative designer specialized in envisioning, conceptualizing and executing new products and services, with a strong technology background and a constant focus on end users. Driven by technology, he is always exploring how technical improvements affect human interaction. He worked at the award-winning Spanish mobile game developer Digital Legends Entertainment (a Top 100 Most Innovative European Company in 2012 according to RedHerring) as Chief Design Officer. He is now a managing partner at Incubio.

[top]


Ricardo Marques

Ricardo Marques, UPF-DTIC
Thursday, January 21, 2016, 3:30pm, 55.410

Group: GTI

Title: Spherical Integration for Global Illumination: From Quasi to Bayesian Monte Carlo

Abstract:
One of the most challenging tasks in computer graphics is the synthesis of photo-realistic images given an accurate model of a virtual scene. Such a task requires modeling the physical behavior of light so as to simulate the interactions between light and materials in the virtual scene, and determining how much light enters the virtual camera through which the scene is 'seen'. This process, commonly referred to as the global illumination problem, is mathematically described by the illumination integral. The illumination integral has no analytic solution except for very particular cases in which very strong hypotheses are assumed. To obtain an estimate of the integral value, it is thus common to resort to classical Monte Carlo techniques, which converge slowly and require a large number of samples to produce good-quality results. However, more sophisticated Monte Carlo techniques can obtain better-quality results using the same number of samples, hence accelerating the synthesis of an image. In this talk I will present two such techniques, based on my recent work on Quasi-Monte Carlo and Bayesian Monte Carlo spherical integration for global illumination.
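As a point of reference, here is a minimal sketch of the classical Monte Carlo baseline the talk improves upon (not the Quasi- or Bayesian variants presented): estimating a hemispherical irradiance integral with uniform direction sampling. For constant radiance L = 1 the exact answer is π, and the slow O(1/√N) convergence is exactly the behaviour the abstract refers to.

```python
import math
import random

random.seed(0)  # fixed seed so the estimate is reproducible

def sample_hemisphere():
    # Uniform sampling of the unit hemisphere (solid angle 2*pi):
    # cos(theta) uniform in [0, 1], phi uniform in [0, 2*pi).
    cos_theta = random.random()
    phi = 2.0 * math.pi * random.random()
    return cos_theta, phi

def irradiance_mc(radiance, n_samples):
    # Monte Carlo estimate of E = integral over the hemisphere of
    # L(omega) * cos(theta) d(omega), with uniform pdf p = 1 / (2*pi).
    total = 0.0
    for _ in range(n_samples):
        cos_theta, phi = sample_hemisphere()
        total += radiance(cos_theta, phi) * cos_theta
    return total * 2.0 * math.pi / n_samples

# Constant radiance L = 1: the exact irradiance is pi.
estimate = irradiance_mc(lambda cos_theta, phi: 1.0, 100_000)
```

Even with 100,000 samples the estimate only agrees with π to a couple of decimal places, which is why variance-reduction schemes such as those in the talk matter.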

Biography:
Ricardo Marques received his MSc degree in Computer Graphics and Distributed Parallel Computation from Universidade do Minho, Portugal, in the fall of 2009, after which he worked as a researcher at the same university. He joined INRIA (Institut National de Recherche en Informatique et en Automatique) and the FRVSense team as a PhD student in the fall of 2010 under the supervision of Kadi Bouatouch. His thesis work focused on spherical integration methods applied to light transport simulation. He defended his PhD thesis in the fall of 2013 and joined the Mimetic INRIA research team as a research engineer in 2014, where he worked in the field of crowd simulation. In the fall of 2015 he joined the Interactive Technologies Group (GTI) of Universitat Pompeu Fabra (UPF) in Barcelona.

[top]


Gwanggil Jeon

Gwanggil Jeon, Incheon National University, Korea
Tuesday, January 19, 2016, 3:30pm, 55.410

Host: Marcelo Bertalmío (IP4EC)

Title: Color Demosaicking in Frequency Domain

Abstract:
This talk addresses the problem of interpolating missing color components at the output of a Bayer color filter array (CFA), a process known as demosaicking. In the first part, a luma–chroma demultiplexing algorithm is presented in detail, using a least-squares design methodology for the required bandpass filters. A systematic study of objective demosaicking performance and system complexity is carried out, and several system configurations are recommended. The method is compared with other benchmark algorithms in terms of CPSNR and S-CIELAB objective quality measures and demosaicking speed. In the second part, a model is presented for the noise in white-balanced gamma-corrected CFA images. A method to estimate the noise level in each of the red, green, and blue color channels is then developed. Based on the estimated noise parameters, one of a finite set of configurations adapted to a particular level of noise is selected to demosaic the noisy data. The noise-adaptive demosaicking scheme is called LSLCD with noise estimation (LSLCD-NE). Experimental results demonstrate state-of-the-art performance over a wide range of noise levels, with low computational complexity.
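For readers unfamiliar with demosaicking, here is a deliberately simple sketch of the problem (plain bilinear interpolation of the green channel, assuming an RGGB Bayer layout; this is not the least-squares luma-chroma method presented in the talk):

```python
def interp_green(cfa):
    """Fill in missing green samples of an RGGB Bayer mosaic by averaging
    the four in-bounds neighbours (all of which are green sites)."""
    rows, cols = len(cfa), len(cfa[0])
    green = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if (r + c) % 2 == 1:           # a true green sample: copy it
                green[r][c] = cfa[r][c]
            else:                           # red or blue site: interpolate
                neigh = [cfa[r + dr][c + dc]
                         for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                         if 0 <= r + dr < rows and 0 <= c + dc < cols]
                green[r][c] = sum(neigh) / len(neigh)
    return green

# A flat scene: green samples are 0.5, red/blue samples 0.2, so the
# interpolated green plane should be 0.5 everywhere.
mosaic = [[0.5 if (r + c) % 2 == 1 else 0.2 for c in range(4)]
          for r in range(4)]
full_green = interp_green(mosaic)
```

Bilinear interpolation like this blurs edges and produces the colour artifacts that the frequency-domain luma-chroma approach is designed to avoid.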

Biography:
Gwanggil Jeon received his BS, MS, and PhD (summa cum laude) degrees from the Department of Electronics and Computer Engineering, Hanyang University, Seoul, Korea, in 2003, 2005, and 2008, respectively. From 2008 to 2009 he was with the Department of Electronics and Computer Engineering, Hanyang University; from 2009 to 2011 he was a postdoctoral fellow at the School of Information Technology and Engineering (SITE), University of Ottawa; and from 2011 to 2012 he was an assistant professor at the Graduate School of Science and Technology, Niigata University. From 2014 to 2015 he was a visiting scholar at ENS-Cachan, France. He is currently an assistant professor in the Department of Embedded Systems Engineering, Incheon National University, Incheon, Korea. In 2016 he was selected for the Young Thousand Talents Program and works as a full professor at Xidian University, China. His research interests fall under the umbrella of image processing, particularly image compression, motion estimation, demosaicking, and image enhancement, as well as computational intelligence such as fuzzy and rough set theories. He received the IEEE Chester Sall Award in 2007 and the 2008 ETRI Journal Paper Award.

[top]


Antonios Lioutas

Antonios Lioutas, CRG and BIOcomuniCA'T
Thursday, January 14, 2016, 3:30pm, 55.410

Host: Aurelio Ruiz

Title: Session 1/2: How to prepare a good scientific poster.

Abstract:
During this seminar we will discuss the power of a scientific poster as a vehicle to pass along scientific knowledge, achievements and hypotheses.

  • The purpose of scientific posters
  • Elements of a poster
  • Common pitfalls and golden rules
  • How to prepare and present your poster

Biography:
Antonios Lioutas is currently a postdoctoral researcher at the CRG. He is a biologist with a PhD in Biomedicine from UPF (CRG). Alongside his laboratory research at the CRG, he has founded a science communication agency, www.biocomunicat.com, together with four other researchers. Besides being passionate about his main work in the lab, cancer research, he has organized numerous interactive workshops and science dissemination events.

[top]


Jesus Alonso-Zarate

Jesus Alonso-Zarate, CTTC
Thursday, December 3, 2015, 3:30pm, Room: 52.S29

Host: Boris Bellalta (WN)

Title: The Internet of Things: A Brave New World

Abstract:
We are approaching the end of the world as we know it. With the connection of everyday objects to the Internet, a new revolution is coming. In this talk, the concept of the Internet of Things will be introduced, identifying key applications, business opportunities, technologies, and research challenges that must be addressed before the new era of connected things becomes a reality. Emphasis will be given to the research and demo activities conducted at CTTC, with the aim of identifying synergies and fostering future collaboration.

Biography:
Jesus Alonso-Zarate is Senior Researcher and Head of the Machine-to-Machine Communications Department at the Centre Tecnològic de Telecomunicacions de Catalunya (CTTC). He has more than 10 years of experience conducting R&D projects in the area of the Internet of Things (IoT). More info at: www.jesusalonsozarate.com.

[top]


Daniel Wolff

Daniel Wolff, UPF-DTIC
Thursday, November 26, 2015, 3:30pm, 52.S31

Host: Emilia Gomez (MTG)

Title: Spot The Odd Song Out: Similarity models in analysis of corpora and listener groups

Abstract:
The concept of similarity can be applied to music in a multitude of ways. Applications include systems which provide similarity estimates depending on the specific user and context as well as analysis tools that show similarity of music with regards to specified compositional, physical or contextual features. The ASyMMuS project allows musicologists to apply similarity analysis to musical corpora on a big-data infrastructure - allowing for a comparison of e.g. the works of a certain composer.

For the analysis of music reception, perceived similarity is of interest. It is specific to individuals and influenced by a number of factors such as cultural background, age and education. We will discuss how to adapt similarity models to the relative similarity data collected in the game with a purpose "Spot The Odd Song Out" (http://mirg.city.ac.uk/casimir/game). Models are parametrised with the help of machine learning techniques. Experiments show that models can be adapted to this very general similarity data, depending on the amount of data available. With transfer learning, the methods can be applied to smaller, e.g. group-specific, datasets, which we use for a comparative analysis of similarity models between different user groups.
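The idea of adapting a similarity model to relative "odd one out" judgements can be illustrated with a toy update rule (entirely made up for illustration, not the actual models from this research): each triplet says two items should end up closer to each other than to the odd item, and feature weights are nudged until the constraint holds.

```python
def dist(w, x, y):
    # Weighted squared Euclidean distance between feature vectors.
    return sum(wi * (xi - yi) ** 2 for wi, xi, yi in zip(w, x, y))

def fit(triplets, n_features, lr=0.1, epochs=50):
    """Adapt feature weights from relative judgements (a, b, odd):
    a and b should end up closer to each other than a is to odd."""
    w = [1.0] * n_features
    for _ in range(epochs):
        for a, b, odd in triplets:
            if dist(w, a, b) >= dist(w, a, odd):   # constraint violated
                for f in range(n_features):
                    # Grow weights on features where a and odd differ,
                    # shrink weights on features where a and b differ.
                    w[f] += lr * ((a[f] - odd[f]) ** 2 - (a[f] - b[f]) ** 2)
                    w[f] = max(w[f], 0.0)          # keep weights valid
    return w

# Hypothetical 2-feature items (e.g. tempo, loudness): the judgement says
# the first two songs are more alike than either is to the third.
triplets = [((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))]
w = fit(triplets, 2)
```

After fitting, the learned weights down-weight the feature on which the "similar" pair differs, so the triplet constraint is satisfied.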

Biography:
Daniel Wolff is currently a visiting researcher at the MTG, working with City University London and TIDO. His research interests focus on audio analysis, music similarity and sound as a cultural phenomenon. Recent research projects include the Digital Music Lab, which provides a big data infrastructure for musicology, and his PhD thesis on "Similarity Model Adaptation and Analysis using Relative Human Ratings" at the Music Informatics Group of City University London. His past research includes (tempo) feature extraction from audio with a focus on periodic patterns at Bonn University, as well as computational bioacoustics with a focus on birdsong recognition. He is also an active musician; past projects include the City University Experimental Music Ensemble (https://goo.gl/M8Kfaz).

[top]


Olivier Coulon

Olivier Coulon, Aix-Marseille University
Thursday, November 19, 2015, 4:30 pm, 52.329

Host: Miguel Angel Gonzalez Ballester (SIMBIOSYS)

Title: Organization and variability of the cerebral cortex: quantification and modeling

Abstract:
The cerebral cortex as observed in neuroimaging shows large apparent variability across subjects. Such variability is an obstacle to the inter-subject matching that is necessary for all neuroimaging group studies, and it limits the possibility of defining intervals of normality and therefore of detecting abnormal characteristics associated with pathologies. In the context of neuroimaging, and in particular of Magnetic Resonance Imaging (MRI) acquisitions, the macro-anatomy of the cortex can be observed and described in terms of folds, sulci, and gyri, which are highly variable, and to date it is still debated whether these landmarks are representative of cortical organisation and architecture. This talk will present different image and data analysis methods used to study cortical organization and variability. In particular, it will show how cortical organization and variability can be modeled and quantified from the global to the local level, how this variability can be related to functional organization, and what we can expect from imaging micro-structure in vivo.

Biography:
Olivier Coulon is a CNRS Director of Research at Aix-Marseille University and the head of the Methods and Computational Anatomy team (MeCA, http://www.meca-brain.org) at the Timone Neuroscience Institute in Marseille, France. After a Master of Engineering at Telecom ParisTech (Paris, France) in 1994, he obtained a PhD in brain image analysis at Telecom ParisTech in 1998. He then moved to University College London (UK), where he spent three years working on spinal cord shape analysis and diffusion tensor imaging. In 2001 he was recruited as a CNRS researcher at Aix-Marseille University. His research interests are the quantification and modelling of cortical organization and variability, and the quantification of cortical development. He also leads the development of the Cortical Surface toolbox in the BrainVisa software (http://brainvisa.info), through which most of the methods developed in the MeCA team are implemented.

[top]


Jose Vicente Manjón Herrera

Jose Vicente Manjón Herrera, UPV
Friday, November 13, 2015, 11:30 am, Room 52.S27

Host: Miguel Angel Gonzalez Ballester (SIMBIOSYS)

Title: Applications of non-local medical image processing

Abstract:
Medical images, like natural images, contain a lot of redundancy. This property has recently been exploited by non-local image processing techniques for many different applications. The main observation is that images can be accurately represented as a non-local combination of a reduced set of image patches. In this talk I will present recent advances and research directions in the field of non-local patch-based medical image processing in our research group IBIME (Universidad Politecnica de Valencia). Specifically, I will show applications of this technology to the problems of noise removal, super-resolution, segmentation and computer-aided diagnosis.
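The non-local principle can be sketched in one dimension (a toy illustration only, not the group's actual methods): each sample is replaced by a weighted average of all samples whose surrounding patches look similar, so averaging happens across the whole signal rather than within a local neighbourhood.

```python
import math

def nlm_1d(signal, patch_radius=1, h=0.5):
    """Toy 1-D non-local means: each sample becomes a weighted average
    of all samples with similar surrounding patches."""
    n = len(signal)
    out = []
    for i in range(n):
        num, den = 0.0, 0.0
        for j in range(n):
            # Squared distance between the patches centred at i and j.
            d2 = 0.0
            for k in range(-patch_radius, patch_radius + 1):
                pi = min(max(i + k, 0), n - 1)   # clamp at the borders
                pj = min(max(j + k, 0), n - 1)
                d2 += (signal[pi] - signal[pj]) ** 2
            w = math.exp(-d2 / (h * h))           # similar patch -> big weight
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

# A noisy step edge: samples on the same side of the edge have similar
# patches and average together, while the edge itself is preserved.
noisy = [0.1, -0.1, 0.05, 1.1, 0.9, 1.05]
denoised = nlm_1d(noisy)
```

Because the weights depend on patch similarity rather than spatial distance, the step edge survives the smoothing that would destroy it under a plain local average.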

[top]


Seiji Isotani

Seiji Isotani, University of Sao Paulo
Thursday, November 12, 2015, 3:30pm, 52.S31

Host: Davinia Hernandez-Leo (GTI)

Title: Advancements in Intelligent Support for Collaborative Learning

Abstract:
Computer-Supported Collaborative Learning (CSCL) is an area of research that investigates how collaboration can be enhanced by technology to support effective interactions among students and promote robust learning. Although the research community has provided considerable evidence of the benefits of using CSCL in the classroom, recent findings indicate that over time students may become demotivated to participate in group work. There are several reasons for this, such as (i) lack of affective support and understanding in current CSCL environments; (ii) problems providing adequate feedback to individuals while they work in groups; and (iii) poor design and evaluation of group activities. In this talk, we will discuss several benefits of collaboration for robust learning, based on strong evidence-based research. Then we will present the challenges of understanding the role of group formation, activity design and interaction analysis in creating intelligent environments for collaborative learning. Finally, we will show how emotions, culture and new technologies also play an important role in understanding collaboration and providing more evidence for creating intelligent CSCL environments that (i) form groups adequately; (ii) support the design of well-thought-out collaborative learning scenarios; and (iii) analyze and track the benefits of group learning.

Biography:
Seiji Isotani is an Educational Technologist, Scientist and Innovator. He holds the positions of Associate Professor of Computer Science and Co-Director of the Applied Computing in Education Laboratory at the University of São Paulo. He is also the co-founder of two startup companies that have won several innovation awards in the field of education and semantic technology. His main research topics are: Intelligent Tutoring Systems (ITS), Computer-Supported Collaborative Learning (CSCL), Ontologies and Semantic Web, Dynamic/Interactive Geometry, and technology-enhanced learning. Isotani’s research group focuses on understanding how computational technologies can be designed and improved to create smart learning environments that help students to achieve robust learning.

[top]


Ichiro Fujinaga

Ichiro Fujinaga, McGill University
Thursday, November 5, 2015, 3:30pm, 52.S31

Host: Xavier Serra (MTG)

Title: Single Interface for Music Score Searching and Analysis Project

Abstract:
A thousand years of print and manuscript music sits on the shelves of libraries and museums around the globe. While online digitization programs are opening these collections to a global audience, digital images are only the beginning of true accessibility, since the musical content of these images cannot be searched by computer. The goal of the Single Interface for Music Score Searching and Analysis project (SIMSSA: http://simssa.ca) is to teach computers to recognize the musical symbols in these images and assemble the data on a single website, making it a comprehensive search and analysis system for online musical scores. Based on optical music recognition (OMR) technology, we are creating an infrastructure and tools for processing music documents, transforming vast music collections into symbolic representations that can be searched, studied, analyzed, and performed.

[top]


Jeffrey C. Smith

Jeffrey C. Smith, CEO and Co-Founder, Smule; Assistant Consulting Professor, CCRMA, Stanford
Monday, November 2, 2015, 1:15 pm, room 52.S27

Host: Xavier Serra (MTG)

Title: User Engagement with Data Science

Abstract:
New and significant repositories of musical data afford unique opportunities to apply data analysis techniques to ascertain insights of musical engagement. These repositories include performance, listening, curation, and behavioral data. Often the data in these repositories also includes demographic and/or location information, allowing studies of musical behavior, for example, to be correlated with culture or geography. Historically, the analysis of musical behaviors was limited. Often, subjects (e.g. performers or listeners) were recruited for such studies. This technique suffered from issues around methodology (e.g. the sample set of subjects would often exhibit bias) or an insufficient number of subjects and/or data to make reliable statements of significance. That is to say the conclusions from these studies were largely anecdotal. In contrast to these historical studies, the availability of new repositories of musical data allow for studies in musical engagement to develop conclusions that pass standards of significance, thereby yielding actual insights into musical behaviors. This talk will demonstrate several techniques and examples where correlation and statistical analysis is applied to large repositories of musical data to document various facets of musical engagement.
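The kind of correlation analysis described above can be illustrated minimally (with hypothetical toy data, not Smule's): computing a Pearson correlation coefficient between two behavioural variables drawn from a usage log.

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length samples.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical toy data: session length vs. number of performances.
minutes = [5.0, 10.0, 15.0, 20.0, 25.0]
performances = [1.0, 2.0, 3.0, 4.0, 5.0]
r = pearson(minutes, performances)
```

At the scale of the repositories the talk describes, even small correlations like this can pass significance tests, which is precisely the contrast with the small recruited-subject studies mentioned in the abstract.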

Biography:
Jeff started his career as a software engineer at IBM's Scientific Research Center in Palo Alto and, after writing software for several companies, eventually co-founded a consumer business in electronic publishing that he sold to Novell/WordPerfect. Jeff took his second company public on the Nasdaq ('TMWD'), growing it from inception to several thousand enterprise customers. Jeff's third company, which he co-founded and where he assumed a non-operating role as a board member, was acquired by Google/Android. Jeff co-founded Smule while pursuing a Ph.D. in Computer Music at Stanford and serves as its CEO and Chairman of the Board. He recently completed his Ph.D., “Correlation analyses of encoded music performance”, in which he documented cultural differences in the interpretation of music performance. Jeff previously received a B.S. in Computer Science from Stanford University. He has co-authored twenty-seven patents. Jeff is also an assistant consulting professor in the Department of Music (CCRMA) at Stanford University, where he teaches a graduate seminar on the study of music engagement through correlation and statistical analysis.

COMPANY SUMMARY: Smule of San Francisco, California is the leading developer of mobile music applications. Smule's applications, including Sing! Karaoke, Magic Piano, Ocarina, AutoRap, etc., have over 30M unique monthly active users. Smule users perform 12M songs and store over 3 terabytes of user content on the Smule network each day.

[top]


Ana Baiges

Ana Baiges, UPF Library Services
Thursday, October 29, 2015, 3:30pm, 52.S31

Host: Aurelio Ruiz

Title: Scientific Dissemination, Online Repositories, and Author's Rights

Abstract:
The purpose of the meeting is to explain the services that the Library offers to UPF researchers and graduate students in the preparation of scientific publications, and the aspects to take into account in their dissemination. Particular emphasis will be placed on intellectual property issues, on compliance with European and Spanish open-access policies (Horizon 2020, Plan Estatal ICTI), and on an overview of author profiles and affiliation.

[top]


Luz Rello

Luz Rello, Carnegie Mellon University
Thursday, October 22, 2015, 3:30pm, 52.S31

Host: Ricardo Baeza-Yates (WRG)

Title: Change Dyslexia: Early Detection and Intervention at Large Scale

Abstract:
More than 10% of the population has dyslexia, and most are diagnosed only after they fail in school. My work is changing this through scalable early detection and tools that help people with dyslexia read and write better. To detect dyslexia, I am developing machine learning models that predict reading and writing difficulties by watching how people interact with my web-based game Dytective. My experiments have revealed differences in how people with dyslexia read and write, and I have developed a series of tools that integrate these results to help people with dyslexia read and write better. These tools are used by tens of thousands of people and, apart from supporting users, also serve as living laboratories in which to develop and prove techniques for detection and intervention. Moving forward, we are working with schools to put our approach into practice at scale and finally eliminate school failure as a primary way dyslexia is diagnosed.

Biography:
Luz Rello is a Postdoctoral Fellow at the Human-Computer Interaction Institute of Carnegie Mellon University. She is also an Ashoka Fellow, an invited expert to W3C-WAI and a co-founder of the Cookie Cloud team, which creates applications from research results. She holds a degree in Linguistics (Complutense University of Madrid), an MSc in Natural Language Processing (University of Wolverhampton) and a Ph.D. in Computer Science (Universitat Pompeu Fabra). She has received a number of awards, including the MIT Technology Review ‘Innovators under 35 Award Spain’ (2014) and the European Young Researchers' Award (2013) for her work applying technology to dyslexia using Linguistics, HCI and NLP. Her IDEAL eBook reader and Dyseggxia game (Vodafone Foundation Mobile for Good Europe Awards 2013) have received nearly one hundred thousand downloads in more than 70 countries. Currently, she is working on detecting dyslexia at large scale and on bringing this work to society via her non-profit organization Change Dyslexia.

[top]


Ye Wang

Ye Wang, National University of Singapore
Monday, October 19, 2015, 3:30 pm, room 55.410

Host: Rafael Ramirez (MTG)

Title: Sound, Music and Sensor Computing for Health and Wellbeing.

Abstract:
The use of music as an aid in healing body and mind has received enormous attention over the last 20 years from a wide range of disciplines, including neuroscience, physical therapy, exercise science, and psychological medicine. We have attempted to transform insights gained from the scientific study of music and medicine into real-life applications that can be delivered widely, effectively, and accurately. We have been trying to use music in evidence-based and/or preventative medicine. In this talk, I will present some of our recent and ongoing projects which facilitate the delivery of established music-enhanced therapies, harnessing the synergy of sound and music computing (SMC), mobile/sensor computing, and cloud computing technologies to promote healthy lifestyles and to facilitate disease prevention, diagnosis, and treatment in both developed countries and resource-poor developing countries.

Biography:
Ye Wang is an Associate Professor in the Computer Science Department at the National University of Singapore (NUS) and NUS Graduate School for Integrative Sciences and Engineering (NGS). He established and directed the sound and music computing (SMC) Lab. Before joining NUS he was a member of the technical staff at Nokia Research Center in Tampere, Finland for 9 years. His research interests include sound, music and sensor computing, mobile computing, and cloud computing, and their applications in music/language learning and e-Health, as well as determining their effectiveness via subjective and objective evaluations. His most recent projects involve the design and evaluation of music recommender systems to enhance: 1) therapeutic gait training using Rhythmic Auditory Cueing (RAC), 2) exercising, and 3) learning English as a second language. His current project also includes the development of novel wearable sensors to quantify human mobility (e.g., gait).

[top]


Julian O’Kelly

Julian O’Kelly, Royal Hospital for Neuro-disability, UK
Wednesday, October 14, 2015, 3:30 pm, room 55.410

Host: Rafael Ramirez (MTG)

Title: Music Therapy with Prolonged Disorders of Consciousness: in Retrospect and Prospect

Abstract:
Music therapists have been researching the effects of music therapy on people in coma, vegetative and minimally conscious states for over thirty years. This talk will provide an overview of the development of contrasting music therapy approaches during this period, arguing the case for more standardised approaches using consensus nomenclature and for greater dialogue with neuroscience going forward (O’Kelly & Magee 2013). This perspective has informed the development of two neurophysiological and behavioral studies exploring the effects of music therapy in the assessment (O’Kelly et al. 2013) and rehabilitation (in progress) of those with prolonged disorders of consciousness (PDOC). Preliminary findings will be presented from a cross-over study comparing the rehabilitative and prognostic potential of music therapy with that of preferred text narration, using a range of neurophysiological measures (EEG, heart rate variability, respiration) and video time-sampled behavioral data. Findings will be discussed in relation to other models of practice, the complexity of the field of research, and the potential of music therapy as a tool for revealing what intact brain network activity exists in those with PDOC. Reflections will be offered on the relevance of these findings to the UK model of neuro-rehabilitation and the sustainability of music therapy in this competitive market.

Biography:
Julian combines his Fellowship at the Royal Hospital for Neuro-disability (RHN) with the post of Clinical Services Manager with Chroma, a leading UK Arts Therapies provider. Previously, as PhD Fellow with Aalborg University, he published several papers from his research on music therapy with disorders of consciousness, giving presentations in America, Canada and across Europe. He is organiser of the 2016 Winter Meeting of the Society for Rehabilitation Research, an expert panel member for the NHS AHP Clinical Expert Database and Associate Editor of a topic for ‘Frontiers in Human Neuroscience’ titled ‘Dialogues in music therapy and music neuroscience’.

[top]


Yasuko Sugito

Yasuko Sugito, NHK (Japan Broadcasting Corporation)
Wednesday, October 14, 2015, 12:30 pm, room 52.S25

Host: Marcelo Bertalmio (IP4EC)

Title: HEVC/H.265 Codec System and Transmission Experiments aimed at 8K Broadcasting

Abstract:
8K Super Hi-Vision (8K) is a broadcasting system capable of delivering highly realistic 8K Ultra High Definition Television (UHDTV) video and 22.2 multichannel audio. In Japan, domestic standards for 8K broadcasting were formulated in 2014, and 8K test broadcasting will begin in 2016. We have developed the world's first 8K High Efficiency Video Coding (HEVC)/H.265 codec system that complies with the domestic standards. In this presentation, I will first talk about the features of and roadmap for 8K broadcasting. Then I will introduce the features of the 8K HEVC/H.265 codec system we developed. Finally, I will explain transmission experiments using a satellite system equivalent to the one that will be used in test broadcasting. The results allowed us to confirm that the developed system provides high-quality transmission at the bit rate expected during test broadcasting.

Biography:
Yasuko Sugito is currently with NHK (Japan Broadcasting Corporation) Science & Technology Research Laboratories, Tokyo, Japan, where she researches video compression algorithms and image processing for 8K Super Hi-Vision. Her current research interests focus on image quality improvement and encoding speed-ups for High Efficiency Video Coding (HEVC), especially for high-resolution images.

[top]


Anders Jonsson

Anders Jonsson, UPF-DTIC
Tuesday, September 29, 2015, 12:30 pm, 55.410

Group: AIG

Title: Lifelong Sequential Decision Making

Abstract:
In artificial intelligence, sequential decision making is the problem of an agent or system repeatedly deciding which action to perform in order to achieve a certain objective. In the past decades, the area of sequential decision making has seen significant progress, and today, dedicated algorithms can solve sequential decision problems with thousands of variables and actions. Almost exclusively, however, state-of-the-art algorithms are evaluated in static environments on a single objective, and the knowledge gained is discarded when an algorithm terminates. In lifelong sequential decision making, on the contrary, the environment is dynamic and the objective may change over time. An agent acting in such an environment has to successively incorporate new information and accumulate knowledge over its lifetime in order to refine and improve its capabilities. In this talk I will discuss the components of lifelong sequential decision making and describe several potential applications.
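A minimal instance of sequential decision making (a toy example, not from the talk) is value iteration on a two-state Markov decision process: repeated Bellman backups converge to the long-term value of each state, from which a policy of repeated action choices follows.

```python
# Toy MDP: two states; the agent repeatedly chooses "stay" or "move",
# and only staying in state 1 yields reward. GAMMA discounts the future.
GAMMA = 0.9
states = [0, 1]
actions = ["stay", "move"]

def reward(s, a):
    return 1.0 if (s == 1 and a == "stay") else 0.0

def next_state(s, a):
    # Deterministic transitions: "move" toggles the state.
    return s if a == "stay" else 1 - s

# Value iteration: repeated Bellman backups until (near) convergence.
V = {s: 0.0 for s in states}
for _ in range(200):
    V = {s: max(reward(s, a) + GAMMA * V[next_state(s, a)] for a in actions)
         for s in states}

# Greedy policy with respect to the converged values:
# V[1] converges to 1 / (1 - GAMMA) = 10, and V[0] to GAMMA * V[1] = 9.
policy = {s: max(actions,
                 key=lambda a: reward(s, a) + GAMMA * V[next_state(s, a)])
          for s in states}
```

In the lifelong setting the abstract describes, the environment or objective would change over time, so the agent would have to keep refining values like these instead of solving a single static problem once.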

Biography:
Anders Jonsson is a lecturer in the Department of Information and Communication Technologies at Universitat Pompeu Fabra. He received his Ph.D. from the University of Massachusetts Amherst in 2005, working on reinforcement learning with Prof. Andrew Barto. His research centers on decision problems, especially sequential decision problems in which one or several agents have to make repeated decisions about what to do. His current research interests include hierarchical decompositions of sequential decision problems into subtasks, multiagent formulations of sequential decision problems, temporal formulations in which actions have a duration, analysis of the computational complexity of different classes of problems, and applications of machine learning to medical research. Anders has taught over 20 courses at the undergraduate and master's levels and is the winner of the 2015 UPF Board of Trustees Prize for quality in teaching.

[top]


Constantine Butakoff

Constantine Butakoff, UPF-DTIC
Friday, October 2, 2015, 12:30, 55.309

Group: PhySense

Title: Computational techniques for cardiac structure representation and analysis

Abstract:
In this seminar I will talk about the research I have carried out over the past five years. The overall work revolved around extracting and representing detailed cardiac anatomy and developing computational tools for analyzing the variability of cardiac shape and structure.

Biography:
Constantine Butakoff has been a lecturer at DTIC and a member of the PhySense research group since 2010. Born in Uzhhorod, Ukraine, he obtained a degree in mathematics at the National University of Uzhhorod in 1999. From 1999 to 2003 he worked in the research department of Neural Networks Technologies Ltd. (Tel Aviv, Israel; now defunct), developing methods for image filtering, pattern recognition and the VEGA Image Processing System. In 2009 he received his Ph.D. from I3A, Universidad de Zaragoza (Spain). His current research is focused on shape and structure modeling and parametrization with applications in biomedicine. In particular, he is interested in developing computational geometry methods to characterize objects with complex geometry and establish point correspondences between them.

[top]