Akshay Maurya
About Akshay Maurya
Akshay Maurya is a Product Engineer based in Bengaluru, India, with a background in mobile application development and AI research. He has developed over 30 mobile applications and has held various engineering roles since completing his Bachelor of Engineering in Electrical, Electronics, and Communications Engineering in 2022.
Current Role at Kalam
Akshay Maurya is currently a Product Engineer at Kalam, a company in Y Combinator's Winter 2023 batch. He has held this role since 2023, working remotely from Bengaluru, Karnataka, India, where he develops and optimizes the company's products.
Previous Experience at qoohoo
Before his current position, Akshay worked at qoohoo as a Product Engineer, a role he took on in 2022 after a five-month stint as a Product Engineering Intern at the same company. His work there centered on product development and improving the user experience.
Educational Background in Engineering
Akshay Maurya earned his Bachelor of Engineering in Electrical, Electronics, and Communications Engineering at Netaji Subhas Institute of Technology from 2018 to 2022, an education that grounds his professional work in core engineering principles and practices.
Mobile Application Development Experience
Throughout his career, Akshay has developed over 30 mobile applications, including One4Wall, Prism Wallpapers, and BuzzFire. Together these applications have surpassed 300,000 installs, demonstrating his skills in mobile app development and user engagement.
Research and AI Projects
Akshay has conducted research on advanced AI topics, including ControlNet, LoRAs (Low-Rank Adaptation), and HyDE (Hypothetical Document Embeddings). His AI projects include evaluating Mains answers and generating feedback, as well as producing model answers for various questions. His current research interests center on Retrieval-Augmented Generation (RAG) for Large Language Models (LLMs).
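For readers unfamiliar with RAG, the core idea can be sketched in a few lines: retrieve the documents most relevant to a query, then prepend them to the prompt sent to an LLM so the model can ground its answer in them. The sketch below is purely illustrative and is not Akshay's implementation; the toy corpus, the bag-of-words cosine similarity, and all function names are invented for this example, and the actual LLM call is omitted.

```python
from collections import Counter
import math

# Toy document store (illustrative data only).
CORPUS = [
    "RAG retrieves relevant documents before generation.",
    "LoRA fine-tunes large models with low-rank adapters.",
    "ControlNet conditions image generation on extra inputs.",
]

def _vec(text):
    """Bag-of-words term-frequency vector for a piece of text."""
    return Counter(text.lower().split())

def _cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=1):
    """Return the k corpus documents most similar to the query."""
    q = _vec(query)
    ranked = sorted(corpus, key=lambda d: _cosine(q, _vec(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, corpus, k=1):
    """Prepend retrieved context to the query, as a RAG system does
    before calling the LLM (the generation step is omitted here)."""
    context = "\n".join(retrieve(query, corpus, k))
    return f"Context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How does RAG use retrieved documents?", CORPUS)
```

Production systems replace the bag-of-words similarity with dense embeddings and a vector index, but the retrieve-then-augment flow is the same.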