PLENARY TALKS


Dr. Loveleen Gaur

Taylor's University,
Malaysia

Topic: Generative AI at the Crossroads of Art, Science, and Ethics.


Abstract: Generative AI is transforming industries, from creating hyper-realistic visuals to revolutionizing personalized healthcare. Yet, its power demands ethical vigilance. This keynote explores critical challenges, including addressing bias, ensuring transparency, and balancing automation with human oversight. Through real-world case studies and cutting-edge research, we'll uncover how interdisciplinary collaboration fuels this technology's potential. Emerging trends like diffusion models and multimodal systems, alongside applications in art, healthcare, and education, will be discussed. Attendees will also examine pressing ethical issues such as bias, intellectual property, and societal impact. Discover actionable strategies for fostering innovation that is inclusive, accountable, and ethically sound.


Dr. Neeranjan Chitare

Birmingham City University,
United Kingdom

Topic: Challenges in dealing with sophisticated attacks: an AI perspective.


Abstract: Phishing attacks, particularly sophisticated phishing, remain a prevalent threat in the cybersecurity landscape, exploiting individuals' reliance on communication systems within organisations. Current technology solutions do not, and in some cases cannot, address the human vulnerabilities that are routinely exploited in targeted, sophisticated phishing attacks. This talk presents a first-of-its-kind study conducted by my team to explore how security practitioners approach sophisticated phishing attacks and how employees rely on markers when determining the authenticity of email messages. Insights obtained from this research are positioned within the landscape of AI and cybersecurity, in terms of both challenges and opportunities.


Dr. Smith Khare

University of Southern Denmark,
Odense

Topic: Role of Explainable AI in healthcare applications.


Abstract: The integration of Artificial Intelligence (AI) in healthcare has transformed diagnostics, decision-making, and patient management. However, the opacity of complex models like deep learning challenges trust, ethics, and regulatory acceptance. Explainable AI (XAI) tackles these issues by making AI systems interpretable to healthcare professionals and stakeholders. Techniques like feature attribution, surrogate models, and counterfactual analysis validate predictions while promoting accountability. XAI is crucial in applications such as disease diagnosis, treatment planning, and outcome prediction, enhancing patient safety by identifying biases and errors. This talk examines XAI's methodologies, challenges, and applications, highlighting its role in ensuring ethical, reliable AI integration in healthcare.