Imaging system could be applied to human medicine
The first-ever robotics-controlled imaging system for use in standing and moving horses has been launched by the University of Pennsylvania’s veterinary school.
The EQUIMAGINE (4DDI) CT imaging system provides an unlimited range of motion and free access to the horse’s entire anatomy.
With clinical and research application for both animal and human medicine, the system can capture the equine anatomy while the horse is awake, load-bearing or in motion.
“The reason this is so revolutionary is that the robots can easily move around the horse in any orientation,” says Barbara Dallap Schaer, medical director at Penn Vet’s New Bolton Center. “We can do the imaging in a patient that is standing and awake. From a clinical standpoint, we will see elements of the horse’s anatomy that we’ve never seen before.”
The robot-powered imager can collect not only typical 2D CT images, but also fluoroscopic (moving) images; 3D images via tomosynthesis; and high-speed radiographs, capturing up to 16,000 frames per second.
Because the system's quality and resolution far exceed those of existing technology, the team hope to detect injuries at much earlier stages and prevent problems that could prove fatal to the horse or rider.
“One of the most important diseases of Thoroughbred racehorses is that they develop certain types of stress fractures that are very difficult to diagnose and characterise,” says Dean Richardson, chief of large animal surgery at Penn Vet’s New Bolton Center.
“This technology has the potential to help diagnose those early enough that we can manage them and help prevent the horse from suffering a catastrophic breakdown on the race track.”
Beyond orthopaedics, the team are set to explore how the technology could be applied to neurology, internal medicine, and sports medicine. They say such progress on the equine front will support significant applications in human medicine, notably for paediatric patients.
“Instead of a child having to be anaesthetised, they could sit there on their iPad and talk to their parents and have the image prepared in 30 seconds,” says Dallap Schaer. “That’s one of the translational pieces we hope to bring to Penn.”
Image © Steven Minicola/University of Pennsylvania