Call For Papers
Workshops
Submission
Registration
Proceedings
Important Dates
Paper Submission Due:
November 10, 2018
Notification of Acceptance:
December 1, 2018
Registration Due:
December 17, 2018
Camera-Ready Paper Due:
December 31, 2018
Conference Date:
July 26-28, 2019
Keynote Speakers
Title: High Performance Data Mining: An Essential Paradigm for Big Data Analytics and AI
Speaker: Ankit Agrawal
Abstract: In this age of “big data”, large-scale datasets are increasingly becoming available in all fields of science. Our ability to collect and store this big data has greatly surpassed our capability to analyze it, underscoring the emergence of the fourth paradigm of science, which is data-driven discovery (unifying the first three paradigms of experiment, theory, and simulations). In this talk, I will present our ongoing research on high performance data mining, which aims at a coherent integration of what I consider to be the two enabling and driving technologies for AI from a data-driven point-of-view – high performance computing and data mining – with an emphasis on solving real-world problems via interdisciplinary collaborations. Particularly, I will highlight some of our exciting research results both in high performance data mining (e.g., algorithms for parallel big data clustering) and the application of data-driven analytics in various scientific and engineering domains (e.g., materials science, healthcare, and social media). I will also demonstrate some examples of related high performance data mining software developed in our group.
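To give a concrete flavor of the parallel big data clustering mentioned above, the following is a minimal illustrative sketch in Python of a map-reduce style k-means update distributed across worker processes. It is not the speaker's software; the data, cluster count, and worker count are placeholder assumptions.

# Illustrative sketch only: a data-parallel k-means step, not the speaker's actual algorithms.
import numpy as np
from multiprocessing import Pool

def partial_assign(args):
    """Assign a chunk of points to the nearest centroid and return partial sums/counts."""
    chunk, centroids = args
    d = np.linalg.norm(chunk[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    k, dim = centroids.shape
    sums = np.zeros((k, dim))
    counts = np.zeros(k)
    for j in range(k):
        mask = labels == j
        sums[j] = chunk[mask].sum(axis=0)
        counts[j] = mask.sum()
    return sums, counts

def parallel_kmeans_step(data, centroids, n_workers=4):
    """One map-reduce style k-means update distributed over worker processes."""
    chunks = np.array_split(data, n_workers)
    with Pool(n_workers) as pool:
        partials = pool.map(partial_assign, [(c, centroids) for c in chunks])
    sums = sum(p[0] for p in partials)
    counts = sum(p[1] for p in partials)
    counts = np.maximum(counts, 1)          # avoid division by zero for empty clusters
    return sums / counts[:, None]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.normal(size=(10000, 8))                       # placeholder dataset
    centroids = data[rng.choice(len(data), 3, replace=False)]
    for _ in range(10):
        centroids = parallel_kmeans_step(data, centroids)
    print(centroids)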
Biography:
Ankit Agrawal (Ph.D. 2009, B.Tech. 2006) is a Research Associate Professor in the Department of Electrical and Computer Engineering at Northwestern University, USA, and specializes in interdisciplinary big data analytics via high performance data mining, based on a coherent integration of high performance computing and data mining to develop customized solutions for big data problems. His research has contributed to large-scale data-guided discovery in various scientific and engineering disciplines, such as materials science, healthcare, bioinformatics, and social media. He has co-authored 100+ peer-reviewed journal and conference publications, including papers at top-tier computer science venues such as KDD, ICDM, CIKM, SDM, ICDE, Supercomputing, and HiPC, and at interdisciplinary venues such as Nature Scientific Reports, APL Materials, MRS Communications, npj Computational Materials, Acta Materialia, Journal of Mechanical Design, Journal of the American Medical Informatics Association, Journal of Computational Chemistry, and IEEE/ACM Transactions on Computational Biology and Bioinformatics. He has also developed and released several open-source software packages, delivered numerous invited and keynote talks, served on the program committees of major research conferences, and served as PI/Co-PI on more than a dozen sponsored projects funded by various US federal agencies (e.g., NSF, DOE, AFOSR, NIST, DARPA, DLA) as well as industry (e.g., Toyota Motor Corporation Japan).
Title: AI-based Complex Action Recognition in Constrained and Unconstrained Videos
Speaker: Jonathan Wu
Abstract: Video analysis plays a crucial role in visual learning applications such as autonomous driving, video retrieval, intelligent monitoring systems, and tele-immersion machines. Most video analysis applications depend heavily on human action recognition frameworks. A variety of algorithmic blocks, including video feature extraction, feature encoding, and classification methods, have been proposed to recognize human actions in pre-segmented video datasets. However, in most real-world visual learning applications the captured videos are not pre-segmented; a recorded video may contain several actions in one shot. Consequently, existing action recognition frameworks fail to recognize the action, since real-world videos have a long and complicated temporal structure. To address this problem, we propose efficient video clustering algorithms based on an extended ELM-type methodology to temporally segment constrained and unconstrained videos into plausible non-overlapping actions. Then, we detect and recognize the key action among the multiple temporal clusters in each video. We propose an unsupervised learning methodology using ELM concepts to represent a video cluster based on the order of its vital frames. Finally, we offer a hybrid classifier that efficiently leverages different kernels and informative features for categorizing a given video cluster into the proper class of action. A brief overview of other deep learning and security related research activities in the presenter's laboratory is also provided. Applications have been extended towards intelligent transportation systems, surveillance and security, face and gesture recognition, vision-guided robotics, and bio-medical imaging, among others.
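For readers unfamiliar with ELM, the sketch below is a minimal NumPy implementation of a basic Extreme Learning Machine (random hidden weights, closed-form output weights), offered only as background. It is not the extended ELM-type video clustering methodology of the talk; the toy features and labels are placeholder assumptions.

# Minimal Extreme Learning Machine (ELM) sketch in NumPy; illustrative only,
# not the extended ELM-type video clustering methodology described in the talk.
import numpy as np

class ELM:
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # Random (fixed) input weights and biases; sigmoid hidden activations.
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, y):
        n_features = X.shape[1]
        self.W = self.rng.normal(size=(n_features, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Output weights solved in closed form by least squares (Moore-Penrose pseudoinverse).
        self.beta = np.linalg.pinv(H) @ y
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Example: classify toy frame-level feature vectors into two pseudo "actions".
X = np.random.default_rng(1).normal(size=(500, 64))    # placeholder per-frame features
y = (X[:, 0] + X[:, 1] > 0).astype(float)              # toy labels
model = ELM().fit(X, y)
acc = ((model.predict(X) > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")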
Biography:
Prof. Jonathan Wu received his doctoral degree in electrical engineering from the University of Wales, Swansea, U.K., in 1990. He was affiliated with the National Research Council of Canada for ten years beginning in 1995, where he became a senior research officer and a group leader. He is currently a professor in the Department of Electrical and Computer Engineering at the University of Windsor, Windsor, ON, Canada. He has authored over 350 peer-reviewed papers in machine learning, computer vision, image processing, multimedia security, intelligent systems, robotics, and integrated microsystems. His current research interests include machine learning and deep learning, 3D computer vision, interactive multimedia, security, sensor analysis and fusion, and autonomous robotic systems.
Dr. Wu holds the Tier 1 Canada Research Chair in automotive sensors and information systems. He is a fellow of the Canadian Academy of Engineering. He has served as chair or as a member of the technical program committees and international advisory committees of many prestigious conferences. He was an associate editor of the IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS, PART A, the IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, and the International Journal of Robotics and Automation. He is currently an associate editor of the IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY and the IEEE TRANSACTIONS ON CYBERNETICS.
Title: Neural Imaging Pipelines – the Scourge or Hope of Forensics?
Speaker: Pawel Korus
Abstract: Forensic analysis of digital photographs relies on intrinsic statistical traces introduced at the time of their acquisition or subsequent editing. Such traces are often removed by post-processing (e.g., down-sampling and re-compression applied when images are distributed on the Web), which inhibits reliable provenance analysis. Increasing adoption of computational methods within digital cameras further complicates the process and renders explicit mathematical modeling infeasible. While this trend challenges forensic analysis even in near-acquisition conditions, it also creates new opportunities. This talk explores end-to-end optimization of the entire image acquisition and distribution workflow to facilitate reliable forensic analysis at the end of the distribution channel, where state-of-the-art forensic techniques fail. A neural network can be trained to replace the entire photo development pipeline, and jointly optimized for high-fidelity photo rendering and reliable provenance analysis. The network learned to introduce carefully crafted artifacts, akin to digital watermarks, which facilitate subsequent manipulation detection. Analysis of performance trade-offs indicates that most of the gains in detection accuracy can be obtained with only minor image distortion. The findings encourage further research toward building more reliable imaging pipelines with explicit provenance-guaranteeing properties.
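As a rough illustration of the joint objective described above, the following PyTorch sketch trains a stand-in "development" network with a weighted sum of a rendering-fidelity loss and a forensic-classification loss. The tiny architectures, loss weight, and random data are assumptions made purely for illustration; the actual work additionally models manipulations and the distribution channel between rendering and analysis.

# Illustrative PyTorch sketch of a jointly optimized imaging/forensics objective;
# architectures, loss weights, and data are placeholders, not the speaker's actual models.
import torch
import torch.nn as nn

developer = nn.Sequential(              # stand-in for a learned photo development pipeline
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)
forensic = nn.Sequential(               # stand-in for a manipulation/provenance classifier
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)

opt = torch.optim.Adam(list(developer.parameters()) + list(forensic.parameters()), lr=1e-3)
fidelity = nn.MSELoss()
detection = nn.CrossEntropyLoss()
alpha = 0.1                              # trade-off between rendering fidelity and detectability

raw = torch.rand(4, 3, 64, 64)           # placeholder "raw" sensor data
target = raw.clone()                     # placeholder reference rendering
labels = torch.randint(0, 2, (4,))       # placeholder pristine/manipulated labels

for step in range(10):
    rendered = developer(raw)
    # Joint loss: render faithfully while keeping the forensic classifier accurate.
    loss = fidelity(rendered, target) + alpha * detection(forensic(rendered), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(float(loss))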
Biography:
Pawel Korus received the M.Sc. and Ph.D. degrees (Hons.) in telecommunications from the AGH University of Science and Technology, Krakow, Poland, in 2008 and 2013, respectively. Since 2014, he has been an Assistant Professor with the Department of Telecommunications, AGH University of Science and Technology. He did his Post-Doctoral Research with the College of Information Engineering, Shenzhen University, China. He is currently a Research Assistant Professor with the New York University Tandon School of Engineering, USA. His research interests include various aspects of multimedia security, image processing, and low-level vision, with a particular focus on content authentication and protection techniques for digital photographs. In 2015, he received a scholarship for outstanding young scientists from the Polish Ministry of Science and Higher Education.
Title: Game Theory for Cyber-Physical System Security and Resilience
Speaker: Quanyan Zhu
Abstract: Advanced Persistent Threats (APTs) have recently emerged as a significant security challenge for Cyber-Physical Systems (CPSs) due to their stealthy, dynamic, and adaptive nature. This talk introduces game theory as a modeling and design framework for cyber-physical system security and resilience. The long-term interactions between an attacker and a defender can be modeled by dynamic games of incomplete information, where each player has private information unknown to the other. Both players act strategically according to their beliefs, which are formed through multistage observation and learning. This talk will present methods to develop game-theoretic defenses and will use case studies of industrial control systems and critical infrastructures to demonstrate how game-theoretically designed proactive defense-in-depth strategies can protect systems from advanced persistent threats and enhance resilience against attacks.
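As a much-simplified illustration of the game-theoretic viewpoint, the sketch below solves a small zero-sum defender/attacker matrix game by linear programming (SciPy's linprog) to obtain a randomized defense strategy. The payoff matrix and strategy names are invented placeholders; the talk's models are multistage games of incomplete information rather than this one-shot toy.

# Illustrative sketch: solving a small zero-sum defender/attacker matrix game with an LP.
# The payoff matrix and strategies are toy placeholders, not the multistage Bayesian
# games of incomplete information discussed in the talk.
import numpy as np
from scipy.optimize import linprog

# U[i, j] = defender payoff when the defender plays pure strategy i and the attacker plays j.
U = np.array([
    [ 3.0, -1.0,  0.0],   # e.g. "patch control network"     (hypothetical)
    [-2.0,  4.0, -1.0],   # e.g. "deploy honeypots"           (hypothetical)
    [ 0.0, -1.0,  2.0],   # e.g. "segment the plant network"  (hypothetical)
])
n, m = U.shape

# Variables: defender mixed strategy x (length n) and game value v.
c = np.concatenate([np.zeros(n), [-1.0]])                  # maximize v == minimize -v
A_ub = np.hstack([-U.T, np.ones((m, 1))])                  # v - sum_i x_i U[i, j] <= 0 for every attack j
b_ub = np.zeros(m)
A_eq = np.concatenate([np.ones(n), [0.0]]).reshape(1, -1)  # probabilities sum to 1
b_eq = np.array([1.0])
bounds = [(0, 1)] * n + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, v = res.x[:n], res.x[-1]
print("defender mixed strategy:", np.round(x, 3), "guaranteed value:", round(v, 3))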
Biography:
Dr. Quanyan Zhu received the B.Eng. degree in Honors Electrical Engineering with distinction from McGill University in 2006, the M.A.Sc. from the University of Toronto in 2008, and the Ph.D. from the University of Illinois at Urbana-Champaign (UIUC) in 2013. After a stint at Princeton University, he is currently an assistant professor in the Department of Electrical and Computer Engineering, New York University. He is a recipient of many awards, including the NSF CAREER Award, the NYU Goddard Junior Faculty Fellowship, the NSERC Postdoctoral Fellowship (PDF), the NSERC Canada Graduate Scholarship (CGS), and the Mavis Future Faculty Fellowship. He spearheaded and chaired the INFOCOM Workshop on Communications and Control on Smart Energy Systems (CCSES) and the Midwest Workshop on Control and Game Theory (WCGT). His current research interests include resilient and secure interdependent critical infrastructures, the Internet of Things, cyber-physical systems, game theory, machine learning, network optimization, and control. He received best paper awards at the 5th International Conference on Resilient Control Systems and the 18th International Conference on Information Fusion. He served as the general chair of the 7th Conference on Decision and Game Theory for Security (GameSec) in 2016 and of the 9th International Conference on NETwork Games, COntrol and OPtimisation (NETGCOOP) in 2018.
Title: Privacy-preserving content-based image retrieval
Speaker: Zhihua Xia
Abstract: The huge number of digital images generated every day can use up local storage very quickly. Many users are therefore highly motivated to upload their images to a cloud server for secure storage, cost saving, and convenient access. Apart from the enormous benefits of storage outsourcing, privacy becomes one of the biggest concerns. A straightforward solution is to encrypt the images with a standard encryption algorithm before outsourcing, but this makes using the images inconvenient. Accordingly, developing secure methods that protect the images while still supporting efficient processing of the protected data has become a popular research topic. This talk will present current methods and technologies for privacy-preserving image retrieval. After summarizing the advantages and disadvantages of the existing methods, some suggestions will be made for further improving current privacy-preserving image retrieval schemes.
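To make the setting concrete, here is a toy Python sketch of one common ingredient of such schemes: the client keeps the image encryption key and uploads only ciphertext images plus a searchable index built from locality-sensitive hashes of image features, so the server can rank matches without seeing plaintext images. The features, dimensions, and hashing scheme below are illustrative assumptions, not any specific published method.

# Toy sketch of a searchable index for privacy-preserving image retrieval; illustrative only.
import numpy as np

rng = np.random.default_rng(42)

def lsh_sketch(feature, planes):
    """Sign random projections: nearby features collide on most bits."""
    return (feature @ planes > 0).astype(np.uint8)

def hamming_similarity(a, b):
    return (a == b).mean()

# Client side: extract features (placeholder random vectors standing in for CNN features),
# hash them, and keep the raw features and the images themselves private.
planes = rng.normal(size=(128, 64))                  # shared projection matrix (index key)
database_features = rng.normal(size=(1000, 128))     # placeholder features of 1000 private images
index = np.array([lsh_sketch(f, planes) for f in database_features])  # uploaded to the server

# Query: the client hashes the query feature locally and sends only the hash.
query = database_features[7] + 0.05 * rng.normal(size=128)
query_hash = lsh_sketch(query, planes)

# Server side: ranks encrypted images by hash similarity, returns the ciphertext of the best match.
scores = np.array([hamming_similarity(query_hash, h) for h in index])
print("best match id:", int(scores.argmax()))        # expected: 7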
Biography:
Zhihua Xia received his Ph.D. degree from Hunan University in 2011. He was a visiting scholar at the New Jersey Institute of Technology in 2015 and a visiting professor at Sungkyunkwan University in 2016. He is currently a professor at Nanjing University of Information Science and Technology. His research interests include digital forensics and privacy-preserving image processing, with particular reference to fingerprint liveness detection, privacy-preserving image retrieval, privacy-preserving image feature extraction, and watermarking in the encrypted domain. He has published over 50 papers and holds a patent on fingerprint liveness detection. His research on privacy-preserving image retrieval has drawn considerable attention: one paper in this field has been cited more than 800 times, and three of his papers have become ESI highly cited papers. He is a managing editor of the International Journal of Arts and Technology and the International Journal of Autonomous and Adaptive Communications Systems, and a guest editor of the Journal of Information Security and Applications. He served as a workshop chair of ICCCS 2018 and a TPC chair of ICAIS 2019.
Title: Automated Interview Generation Using Deep Learning on Unstructured Text
Speaker: Tom Masino and Elie Naufal
Abstract: An efficient technique for creating interview bots from unstructured text is developed, in which the bot determines an individual's understanding of the ontologies mined from the text. The technique was used to identify an individual's knowledge of the products and services of publicly traded companies, even when they may not know about the specific corporate entities themselves. To build the interview bot, we implemented standard text cleansing processes before 1) applying Named Entity Recognition to identify subject matter, 2) extracting intent phrases using various forms of Dialogue Act Recognition and both Bayesian and non-Bayesian Semantic Analysis, and 3) cataloging ontologies based on our deep learning LSTM models as well as a non-neural-network Hidden Markov Model.
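The self-contained Python sketch below mirrors the pipeline stages named in the abstract (cleansing, entity recognition, intent extraction, question generation) with deliberately simple placeholder components. Real systems would use trained NER, dialogue-act, and LSTM/HMM models; the lexicons, example text, and helper functions here are invented for illustration only.

# Toy sketch of the interview-bot pipeline stages; each stage is a simple placeholder
# standing in for the trained models described in the abstract.
import re

COMPANY_LEXICON = {"acme corp": "ORG", "widgetco": "ORG"}      # placeholder entity dictionary
PRODUCT_LEXICON = {"rocket skates", "widgets"}                 # placeholder product terms

def cleanse(text):
    """Standard text cleansing: strip markup, normalize whitespace, lowercase."""
    text = re.sub(r"<[^>]+>", " ", text)
    return re.sub(r"\s+", " ", text).strip().lower()

def recognize_entities(text):
    """Dictionary lookup standing in for a trained Named Entity Recognition model."""
    return [(name, label) for name, label in COMPANY_LEXICON.items() if name in text]

def extract_intents(text):
    """Keyword rules standing in for dialogue-act recognition / semantic analysis."""
    return [p for p in PRODUCT_LEXICON if p in text]

def generate_questions(entities, intents):
    """Turn mined entities/intents into interview questions probing the user's knowledge."""
    questions = [f"Have you heard of the company '{e}'?" for e, _ in entities]
    questions += [f"Which company do you associate with '{p}'?" for p in intents]
    return questions

doc = "<p>Acme Corp announced record sales of rocket skates and widgets this quarter.</p>"
clean = cleanse(doc)
for q in generate_questions(recognize_entities(clean), extract_intents(clean)):
    print(q)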
Biography:
Dr. Tom Masino received his B.A. degree in Biological Sciences from the University of Delaware, USA, in 1981, and his Ph.D. degree in Neurobiology from the University of Chicago, USA, in 1987. From 1987 to 1989, he was a Postdoctoral Fellow in the Department of Neurobiology at Stanford University, USA. From 1989 to 1993, he held an NIH grant and worked as a researcher. Dr. Masino has rich industry experience. From 1992 to 1995, he was a Principal at Logicstream. From 1996 to 2008, he was the Chief Executive Officer and Founder of Netmosphere. From 1998 to 2003, he was the Chief Technology Officer and Founder of Intravation. From 2003 to 2010, he was a Senior Quantitative Strategist at Infinium Securities, LLC. From 2010 to 2013, he was the Chief Data Scientist and Analytics Group Lead at Integral Development Corp. From 2013 to 2014, he was a Director of Optimization at Groupon. From 2013 to 2017, he was a Principal at Applied Deep Learning, LLC. He is currently a Director at TradeWeb, LLC. His work spans data science and machine learning, big data, data engineering, and software development.
Elie Naufal has been running investment strategies in hedge funds for 20 years. He has managed multi-strategy funds with over $500 million in assets. Versed in machine learning and neural networks, he currently leads modeling at Applied Deep Learning LLC. He received his M.S. degree in Operations Research from Columbia University, USA.