Researchers prompted ChatGPT to provide exercise recommendations for 26 populations identified in the American College of Sports Medicine’s Guidelines for Exercise Testing and Prescription, considered the gold standard in the field. The populations included healthy adults, children and teens, the elderly, people with cardiovascular disease, and people with obesity.
Most of ChatGPT’s recommendations were factually correct, with an accuracy rate of 90.7% when compared to the gold-standard reference source. However, the researchers wrote that the recommendations were not comprehensive, covering only 41.2% of the guidelines.
The tool also generated misinformation about exercise for people with high blood pressure, fibromyalgia, cancer, and other illnesses. The answers for people with hypertension were the least accurate, and they failed to recommend vigorous exercise, which is appropriate for most people in that group.
The AI-generated answers also misinformed readers about whether they should exercise in the first place 53% of the time, urging medical clearance before exercising even when the people in question did not need to consult a doctor before starting a training plan. The researchers warn this could discourage people from exercising and cause undue worry and unnecessary doctor visits.
The researchers also said the recommendations were not as readable as they should have been: on average, they were rated “difficult to read” and were written at a university level.
Overall, the researchers say that healthcare providers and patients alike should exercise caution when relying solely on AI for exercise recommendations, and conclude that future research should focus on measuring other factors, such as appropriateness, cost, and feasibility.


