The article examines the growing role of artificial intelligence (AI) in modern warfare, drawing parallels between the conflicts in Gaza and Ukraine. It highlights how AI-powered systems such as loitering munitions, also known as suicide drones, have been deployed by both sides in these conflicts. Because these drones can autonomously identify and strike targets, they raise ethical concerns about lethal autonomous weapons systems (LAWS). The article then turns to the risks and challenges of AI-enabled warfare, including the difficulty of attributing responsibility for strikes, the potential for unintended escalation, and the absence of clear international regulations governing LAWS. It also weighs the potential advantages of AI in warfare, such as increased precision and reduced risk to human soldiers, and concludes by emphasizing the need for international cooperation and governance frameworks to address the ethical and legal implications of AI-powered warfare.