Bio: Antonio J. Jara (General Manager Smart Cities at Libelium) is chair of Data Quality and IoT in IEEE. He did his PhD (Cum Laude) at the University of Murcia (UMU), Spain. His PhD results present a novel way to connect objects to Internet-enabled platforms in an easy, secure and scalable way. He also completed an MBA and entrepreneurship training at the ENAE business school and UCAM (2012). He received entrepreneurship awards from ENAE (sponsored by SabadellCAM financial services) and emprendeGo (sponsored by the Spanish government), as well as the IPSO Alliance Award (sponsored by Google) for his disruptive innovation in the IoT, and was selected and mentored by the FIWARE acceleration program. As part of HOPU, Antonio Jara focuses on the Smart Cities market with solutions for citizen engagement and environmental monitoring (air quality sensors). He has participated as a speaker in over 100 international events on the Internet of Things, has over 100 international publications/papers (~5,000 citations and an h-index of 37), holds several patents in the IoT domain, and has advised companies such as Microsoft and Fujitsu in the IoT domain.
Abstract: Technologies and standards are converging on a common integration ecosystem. Starting with APIs and common information systems such as FIWARE (ETSI NGSI-LD), and continuing with the creation of data models (Smart Data Models) that unify and define a vendor-independent approach to describing information, it has become possible to achieve the ambition of testing locally and exploiting globally, promoting high replicability at low cost and culminating in the ability to transfer experiences and successful solutions between cities. The next step will be to transfer knowledge as well: faced with common challenges such as Low Emission Zones, cities will be able to learn collaboratively, reaching a tactical urbanism of cooperation and cross-city impact.
Abstract: This joint keynote speech focuses on agent modeling for multimodal interactions of both humans and artificial agents. Human interaction often happens through different modalities such as movements, facial expressions, and verbal utterances. These multimodal human interactions often become attuned to each other; for instance, partial mimicry of movements and facial expressions emerges. Such types of attunement are also indicated by the term interpersonal synchrony, which can occur for different modalities. Interpersonal synchrony usually results in increased behavioral adaptivity, encompassing a range of behavioral outcomes from better cooperation to increased liking or bonding. Since the link between interpersonal synchrony and behavioral adaptivity is an overall mechanism that arises automatically in humans, to give the interaction a human feel it is advisable to build it into human-computer interaction as realistically as possible. Our agent models, based on an adaptive network-oriented modeling approach, offer an adequate tool to simulate and analyse these emergent processes, and can therefore provide a good basis for adaptive human-like virtual agents in various contexts.
Bio: Rodrigo da Rosa Righi is a Senior Member of the IEEE and a Senior Member of the ACM, as well as a professor and researcher at the University of Vale do Rio dos Sinos (Unisinos), Brazil. Today, he is the coordinator of the Applied Computing Graduate Program (Master and Ph.D.) at this university. Rodrigo completed his post-doctoral studies at KAIST - Korea Advanced Institute of Science and Technology, South Korea, on the topics of IoT and cloud computing. He obtained his MS and Ph.D. degrees in Computer Science from the Federal University of Rio Grande do Sul, Brazil, in 2005 and 2009, respectively. He is the coordinator of national and international projects on resource management in distributed systems, fog and cloud computing, Industry 4.0, and Artificial Intelligence. His research interests include performance analysis, predictive maintenance, event prediction and correlation, cloud and fog resource elasticity, and microservices to enable the next generation of mobile communication (5G).
Abstract: Fog computing architectures are gaining popularity as an alternative for providing low-latency communication when executing distributed services. Combined with cloud resources, it is possible to assemble an architecture that includes both resources residing close to data providers and resources with more processing capacity reached through Internet links. In this context, this talk presents first insights into a fog-cloud architecture for the healthcare area. In particular, we address vital sign monitoring with sensor devices and provide intelligent health services that reside both in the fog and in the cloud to offer benefits to end-users and public government. The preliminary results show the advantages of combining fog and cloud for critical applications and highlight some points of attention regarding system scalability and quality of service.
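As a hedged illustration of the fog-cloud split described above, the sketch below routes a task to the fog or the cloud by its latency requirement. The threshold, task names, and latency figures are illustrative assumptions, not values from the talk.

```python
# Hypothetical latency budget for critical alerts (assumption, not from
# the talk): anything that must respond faster stays in the fog.
FOG_LATENCY_BUDGET_MS = 50

def place_task(task):
    """Route a task to 'fog' or 'cloud' by its latency requirement."""
    if task["max_latency_ms"] <= FOG_LATENCY_BUDGET_MS:
        return "fog"    # near the data provider, low-latency link
    return "cloud"      # more processing capacity over the Internet

tasks = [
    {"name": "heart-rate alert", "max_latency_ms": 20},
    {"name": "weekly health report", "max_latency_ms": 60000},
]
for t in tasks:
    print(t["name"], "->", place_task(t))
```

A real deployment would also weigh bandwidth, energy, and current load, but the latency-budget split captures the basic design choice of the architecture.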
Bio: Chintan Amrit is an Associate Professor in the Department of Operations Management at the University of Amsterdam. He completed his Ph.D. at the University of Twente in the area of coordination in software development, having started it at RSM Erasmus University. He holds a master's degree in Computer Science from the Indian Institute of Science, Bangalore. In the past, he worked for three years as a software engineer. His research interests are in the areas of business intelligence (using machine learning), open-source development, and mining software repositories. His work has been accepted in venues such as Journal of Information Technology, Decision Support Systems, Information and Software Technology, International Journal of Production Research, Social Science Computer Review, Information Systems Management, Journal of Systems and Software, IT Professional, Journal of Software: Evolution and Process, and Environmental Modelling & Software, among others. He serves as a coordinating editor of the Information Systems Frontiers journal, an associate editor of the PeerJ CS journal, and is a regular track chair at ECIS.
Abstract: Estimates of caseloads of wasting are essential for prioritizing the most vulnerable areas and populations at risk of malnutrition. In this talk, I present the work we undertook in collaboration with the UN World Food Programme's (WFP) Regional Bureau of Dakar (RBD) to estimate the burden of wasting in the Sahel region of Africa. I describe the steps we took to analyze the current Hotspot analysis used to estimate the burden, the data gathering and processing steps, and the three machine learning models we built to estimate the burden using both new features and those previously considered. I then compare the efficacy of the models in predicting the severity of acute malnutrition and assess their performance in comparison to the current Hotspot approach used by WFP. I will also describe the analysis performed to identify the most important features influencing the predictions of each model and their commonalities across models.
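One common way to identify the most important features of a trained model, as described above, is permutation importance: shuffle one feature's values and measure how much accuracy drops. The sketch below is a minimal, self-contained illustration; the toy model, feature names, and data are invented placeholders, not WFP data or the talk's actual models.

```python
import random

def accuracy(model, X, y):
    """Fraction of rows the model labels correctly."""
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature_idx, seed=0):
    """Importance = accuracy drop when one feature column is shuffled."""
    rng = random.Random(seed)
    column = [row[feature_idx] for row in X]
    rng.shuffle(column)
    X_shuffled = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                  for row, v in zip(X, column)]
    return accuracy(model, X, y) - accuracy(model, X_shuffled, y)

# Toy data: [rainfall_index, market_price]; label 1 = high wasting risk.
X = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.2], [0.8, 0.1],
     [0.15, 0.85], [0.85, 0.15], [0.1, 0.95], [0.9, 0.05]]
y = [1, 1, 0, 0, 1, 0, 1, 0]

model = lambda row: 1 if row[0] < 0.5 else 0  # thresholds on feature 0 only

for i, name in enumerate(["rainfall_index", "market_price"]):
    print(name, permutation_importance(model, X, y, i))
```

Because this toy model ignores the second feature entirely, shuffling that feature changes nothing and its importance is zero; applying the same procedure to each of several models gives the cross-model comparison of feature importance mentioned in the abstract.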
Bio: Irish Singh is a researcher at the Knowledge-Intensive Software Engineering (NiSE) Lab. She received her Ph.D. from the Department of Computer Engineering, Ajou University, South Korea. Before that, she earned a Master's degree in Computer Engineering from the Birla Institute of Technology, India, and a Bachelor's degree in Computer Science and Engineering from Uttar Pradesh State Technical University, India. Her research interests are Connected Minds, Adaptive Security, Cloud Networks, Blockchain Technology, Software Engineering, Requirements Engineering, and Human-Computer Interaction.
Abstract: Human-Computer Interaction (HCI) systems involve two-way communication and information exchange between application stakeholders and HCI applications. Self-adaptive security requirements engineering helps to elicit the security needs and requirements of individual application stakeholders, as well as those of the HCI system. In recent times, several security vulnerabilities have been reported in HCI applications. Moreover, HCI applications are often not developed for secure and successful interaction with application stakeholders. Secure HCI is the study of how stakeholders securely interact with an HCI system and of the extent to which a self-adaptive security requirements engineering process can be used by HCI application informaticists to understand the major elements of security and usability. The process also helps HCI informaticists design applications that are secure, effective, efficient, and user-friendly to ensure customer satisfaction.
Bio: Dr. Koumudi Patil is an Associate Professor in the Department of Design, Indian Institute of Technology Kanpur, India. She works in the area of interactive and inclusive design of pedagogies for school curriculum and the creative industry in the Indian developmental context.
Abstract: The author has coined the term 'Digital Desis' to include early adopters within the category of Digital Immigrants who fundamentally think and process information differently than digital natives. In formal educational institutions, Desis are subjected to pedagogical regimes based on decontextualised knowledge and skill sets steeped in unplugged technologies. Often, their performance in the classroom is below average. However, outside school, the same individuals seamlessly navigate complex and multimodal context-specific environments. Several acclaimed projects targeting the informal learning environment, such as One Laptop per Child and the Hole-in-the-Wall initiative, aim to increase Digital Desis' access to computers and other plugged activities. Even though the context in which these projects operate is informal, their approach is often overly techno-centric, ignoring the significant role that local contexts play in shaping ICTs.
Despite the significance of ICTs, schools in developing countries continue to focus on imparting abstract procedural knowledge through unplugged tools, with less attention to the application of this knowledge in the real world. Children are not explicitly taught how to link abstract or symbolic content with the real world, or vice versa.
Therefore, this paper attempts to unfold the barriers and enablers of e-tools for integrating the informal learning pedagogy of Digital Desis through the case study of an online educational game. The digital game was designed and developed by the author to integrate informal learning contexts on a digital platform deployed in formal educational institutions. A paired t-test was applied to the pre- and post-test data of 30 students who used the game for one month. The subjects had little or no knowledge of, or access to, computers before they participated in this study. 450 hours of video footage were analysed to understand the process of knowledge acquisition on computers by young Digital Desis and its impact on their learning trajectory. The post-test scores were significantly higher than the pre-test scores, and informal knowledge acquired out of school was applied fluently by students in the educational game.
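A paired t-test of the kind used above compares each student's post-test score against that same student's pre-test score. A minimal sketch, with invented placeholder scores (not the study's data):

```python
import math
import statistics

def paired_t_test(pre, post):
    """Paired t-test statistic: t = mean(d) / (sd(d) / sqrt(n)),
    where d are the per-subject differences post - pre."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)          # sample standard deviation
    return mean_d / (sd_d / math.sqrt(n))   # t with n - 1 degrees of freedom

# Hypothetical scores for 10 students, for illustration only.
pre  = [12, 15, 11, 14, 10, 13, 12, 16, 11, 14]
post = [18, 20, 15, 19, 14, 17, 18, 21, 15, 19]

t = paired_t_test(pre, post)
print(f"t = {t:.2f} with {len(pre) - 1} degrees of freedom")
```

The resulting t statistic is then compared against the t distribution with n - 1 degrees of freedom to judge whether the post-test improvement is significant, as reported in the study.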
Bio: Dr. Mukesh Saini has 12 years of experience in video processing and data fusion. He obtained a Master of Technology (M.Tech) in Electronics Design and Technology from the Indian Institute of Science (IISc), Bangalore, in 2006 and a PhD in Computer Science from the School of Computing, National University of Singapore, in 2012. He worked as a post-doctoral researcher at the National University of Singapore, the University of Ottawa, and New York University. In recent years, he has focused on information systems that exploit multimodal data, particularly video, audio, and text, in the application areas of smart classrooms, social network analysis, multimedia surveillance, and automatic video mashups.
Abstract: Video is one of the key mediums for computer-to-human interaction, and video presentations are used to communicate information to users. Currently, multiple high-resolution cameras are used to capture the same event. Displaying these videos in raw format is tedious and may not effectively meet the objectives of the video presentation. I will discuss various methods for, and challenges of, creating a mashup of these videos to meet the desired objective. Specifically, I will discuss online video mashup approaches for surveillance and social network sharing.
Bio: Dr. Jan-Willem van 't Klooster has been director of the BMS Lab, the innovation lab of the Faculty of Behavioural, Management and Social Sciences at the University of Twente, The Netherlands, since 2019. The lab comprises over a dozen experts, 16 lab rooms, a mobile lab, various large-scale research software products, and more than 15 assistants. He has coordinated multiple national and European projects and work packages, including EFRO, ZonMw, and EIP AHA research projects. Grants won as (co-)applicant include EFRO and euregional funding, as well as work for the Dutch Ministry of Health. Jan-Willem obtained his PhD in health informatics from the University of Twente in 2013. After that, he worked as a product owner at Nedap, a large electronic healthcare corporation, and as a project manager developing health-related self-management services at Roessingh Research and Development, the largest telemedicine and rehabilitation institute in The Netherlands. He taught computer science and was section chair of computer science at the Bonhoeffer College for 11 years, was involved in nationwide ICT curriculum design for SLO, and peer-reviews for various eHealth-related journals and conferences, including JMIR and PLOS ONE.
Abstract: How well do humans function in interaction with high-tech systems? A car is full of sensors that measure how well the car operates: if the tire pressure is too low, a warning light goes on; if the catalytic converter is defective, the same happens. There are many such systems that warn of potential issues.
In complex systems, even more sensors check the condition of the system and optimise it. However, this does not guarantee optimal functioning, because the human in the equation is often a critical component: humans make (good or bad) decisions, and communicate or miscommunicate. What about the functioning of the human component in these kinds of systems? Is he or she still alert, tired, or overburdened?
In our BCI testbed, we investigate these kinds of questions using brain measurements under different (ambulant) conditions, together with companies in the region and large enterprises. In this tutorial, we will cover different aspects of measuring human functioning, including the possibilities and challenges of BCIs (brain-computer interfaces), and give appealing examples and research opportunities to measure, to know, and to improve!
Bio: Nagarajan Prabakar is an Associate Professor in the School of Computing and Information Sciences at Florida International University. He received his Ph.D. from the University of Queensland in Database Systems. He developed a scheme to access a vast amount of spatial data from a semantic database and fly over the data in real-time – this emerged as TerraFly software from the High Performance Database Research Center, FIU. He has also designed dynamic mosaicking algorithms for spatial images and integrated vector GIS data with spatial data sets. Currently, he is working on quantum algorithms, security models for cloud storage, and advanced machine learning for cyber physical systems.
Abstract: What salient features of quantum computing establish quantum supremacy? How do the wave function of quantum states and the probabilistic nature of measurement results impact the deployment of this emerging technology? We will present a glimpse of scenarios that answer these questions and describe the role of HCI in the design and use of quantum systems.
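The "probabilistic measures of results" mentioned above refer to the Born rule: the probability of observing an outcome is the squared magnitude of its amplitude in the state's wave function. A minimal sketch, using an illustrative equal-superposition qubit state:

```python
import math
import random

def measure_probabilities(amplitudes):
    """Born rule: map complex amplitudes to outcome probabilities |a|^2."""
    return [abs(a) ** 2 for a in amplitudes]

def sample_measurement(amplitudes, rng=random):
    """Simulate one projective measurement of the state."""
    probs = measure_probabilities(amplitudes)
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

# Equal superposition (|0> + |1>) / sqrt(2): each outcome has prob 0.5.
state = [1 / math.sqrt(2), 1 / math.sqrt(2)]
print(measure_probabilities(state))  # ~[0.5, 0.5]
print(sample_measurement(state))     # randomly 0 or 1
```

This randomness of individual readouts, as opposed to the deterministic outputs of classical computing, is one of the HCI challenges in presenting quantum results to users: a single run conveys a sample, not an answer.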
Bio: Muhammad Taqi Raza joined the University of Arizona's information systems department in 2019, soon after earning his PhD in computer science from UCLA. His research interests broadly lie in the area of networked systems and security. His research contributes to a better understanding of state-of-the-art networked systems by challenging their operational efficacy and identifying unexplored aspects of their heterogeneity, providing simple and innovative solutions from system design to operations through verification.
Abstract: Today, 5G mobile networks enable human and machine interaction at massive scale. To enable secure human-machine interaction, 5G systems have built-in security mechanisms that protect against disclosure of the information exchanged between users, machines, and the network. Despite these existing security mechanisms, I will show that an attacker is still capable of eavesdropping on users' interaction with machines, impersonating a user by forging packets, and causing service outages. My key finding is that the attacker can break 4G/5G encryption and integrity protection without relying on knowledge of the security key. Motivated by these attacks, I advocate an efficient and exhaustive vulnerability analysis of 4G/5G systems to discover previously unknown security loopholes. In this talk, I will demonstrate how we can build systems and design algorithms that extract new vulnerabilities and enable exhaustive security analysis in polynomial time. Further, I will discuss how this approach provides a new dimension for jointly solving security and availability problems in various related fields, including machine-to-machine communication, multimedia subsystems, and network analytics.
Bio: Abhishek Shrivastava, Ph.D., is a faculty member in Human-Computer Interaction at the Department of Design, IIT Guwahati. In addition, he has chaired the steering committee of the HCI Professional Association of India (HCIPAI) since February 2022. For over a decade now, his research has focused on user interaction issues across diverse technology deployments and end-user applications involving voice agents and Voice User Interfaces. During his Ph.D., he conducted studies evaluating the role of spoken language in improving user performance and subjective satisfaction. He has led the research, design, and development of (speech-based) assistive tools for specially-abled children. These tools have been tested in close-to-real-deployment scenarios at the All India Institute of Speech and Hearing (AIISH Mysuru) by trained speech-language pathologists. He is the recipient of prestigious research grants from the Ministry of Education and the Department of Biotechnology, along with other organisations. He has experience working in consortium mode on three projects under the Imprint India scheme. He has been recognised as the Subject Matter Expert (SME) in Interaction Design by the National Programme on Technology Enhanced Learning (NPTEL).
In recent years, his research group has actively examined aspects of turn-taking involving temporal behaviours in human-machine conversations. The group is currently studying half-duplex conversations with voice agents. Further, in collaboration with industry partners, the group has contributed to the design and development of TaskSpeech studio, a suite of applications aimed at making the learning of spoken dialog systems easier for novice learners. In 2020, he was invited to the prestigious Dagstuhl Seminar in Dagstuhl, Germany, where invited experts from academia and industry discussed research issues regarding Spoken Language Interactions with Voice Agents and Robots (SLIVAR).
Abhishek has contributed to a total of 17 different courses, besides collaborating with peers across two different centres at IIT Guwahati: the Centre for Linguistics Science and Technology (CLST) and the Centre for Intelligent Cyber-Physical Systems for Underwater Explorations (CICPS). Earlier in his career, he gained a range of professional experience, including working in industry, serving as visiting faculty, and being a published cartoonist in different newspapers.
Abstract: Turn-taking is an intrinsic aspect of sustaining conversations with desired outcomes between the human user and the Voice User Interface (VUI). While turn-taking may seem to come naturally to human interlocutors in a conversation, this is perhaps not the case when talking to a VUI. Often such dialogues yield undesirable outcomes in which user goals remain unsatisfied. Cascading errors and fewer instances of self-recovery (by users) in conversation with a VUI lead to interactions that experts call the "death of a thousand cuts." Within this context, the talk sheds light on the nature of turn-taking in a half-duplex dialogue with a VUI. In addition, it presents a snapshot of the ongoing research and proposes specific strategies to improve turn-taking with VUIs.