A recent paper in the Proceedings of the National Academy of Sciences (Quispe-Tintaya et al, 2013) describes a novel approach to treating metastatic pancreatic cancer. Pancreatic cancer is a deadly disease with a poor prognosis, and a big part of the difficulty in treating it is that it often metastasizes well before the primary tumor is detected. Early stage pancreatic cancer is often asymptomatic, and even the early symptoms are not very specific, often presenting, for example, as bad heartburn or abdominal or back pain. By the time pancreatic cancer is diagnosed, it is very often advanced and incurable.
Radiation therapy as delivered today is primarily a focal therapy, used to treat primary tumors. Once disease has become metastatic, however, radiation therapy is generally no longer curative, and will only delay the eventual death of the patient by cancer. Modern chemoradiotherapy typically only improves survival in pancreatic cancer by a few months.
There have been several attempts at using targeted radionuclides to treat cancer, mainly by attaching radioactive isotopes to monoclonal antibodies (called radioimmunotherapy). This has been shown to be partially effective for refractory non-Hodgkin’s lymphoma. However, radioimmunotherapy has so far delivered very modest results in the treatment of metastatic cancer. Part of the problem with the effectiveness of radioimmunotherapy is that cancers are often very good at suppressing immune system activity in the tumor microenvironment.
The new approach described in this paper turns tumor immune suppression to our advantage. By attaching a radionuclide to an attenuated (non-pathogenic) bacterium, the researchers were able to deliver a lethal dose of radiation to tumors throughout the body, but in a form that was quickly and harmlessly cleared from normal tissues by the regular immune response to the bacteria.
In a mouse model of aggressive pancreatic cancer, the treatment was able to reduce the number of metastases in the treated animals by 90% compared with control groups. While there was a transient spike in radioactivity in the liver and kidneys, this did not produce any visible or detectable functional tissue damage, possibly because the radiation was more effective in damaging the DNA of rapidly proliferating cells in the metastases. The radioactivity was also less effective in treating primary tumors, again probably because the cells in the primary tumors in this model are largely non-proliferating. The therapy could possibly be combined with focused therapy (surgery or radiation) of the primary tumor.
The authors speculate that this therapy may also be applied to other forms of metastatic cancer. The effectiveness of the therapy may depend upon the characteristics of different forms of cancer, such as how rapidly the cells are proliferating and the degree of hypoxia in solid tumors, as this particular bacterium is known to replicate better in hypoxic environments.
Of course, this is a very early result, and much more work needs to be done to turn this into an effective treatment in human beings. Still, this represents the first demonstration of a new and promising strategy for treatment of an extremely challenging form of cancer. Stay tuned.
Wilber Quispe-Tintaya, et al., "Nontoxic radioactive Listeria(at) is a highly effective therapy against metastatic pancreatic cancer", PNAS (online publication).
In a previous blog, I discussed the abscopal or bystander effect, in which irradiation of a tumor in one part of the body causes the remission of tumors outside of the treated area. Results from mouse experiments seemed to indicate that the remissions were the result of an immune response induced by the radiation.
A recent report by Stamell et al (IJROBP, Vol. 85, Issue 2, 1 February 2013, pp. 293-295) now describes a clinical case study of a melanoma patient in which a systemic immune response was induced, first by radiation therapy alone, then, after a recurrence, by radiation therapy combined with an immunotherapy drug. This patient presented with metastatic melanoma in the head and neck, normally a disease with a very poor prognosis. The disease progressed despite chemotherapy, and the patient was given palliative electron therapy (800 cGy x 3 fractions) to the largest of his lesions, while many smaller 'satellite metastases' went untreated. Six weeks after treatment, the treated tumor had regressed, but there was no change in the untreated mets. However, 8 months after therapy, a surprising thing happened: all of the untreated mets resolved too. Blood tests detected antibodies to a melanoma antigen, meaning that the radiation therapy had induced an immune response to the cancer. This effect has been observed in the past, but this was the first clinical case in which antibodies specific to the cancer were detected, effectively proving that the radiation caused the patient's immune system to eradicate the cancer.
The patient remained free of skin cancer for 36 months, but then developed nodal and brain metastases. The patient was treated again, this time treating only the brain mets with stereotactic radiosurgery, but combining the treatment with ipilimumab, a drug designed to enhance the immune response. Again, the treated mets resolved after therapy, but the untreated nodal mets also completely resolved. The patient is still alive and cancer-free 7 years after the initial treatment.
The combination of radiation and immunotherapy has the potential to be an extremely powerful therapy for patients with metastatic disease. It seems to be associated with large fractional doses of radiation, the sort that are associated with either palliative therapy or radiosurgery. But many questions remain as to how to quantify or predict the effects of this combination therapy, which will require careful clinical work to clarify. A single case study, although very intriguing, represents only the very early stages of the work. Stay tuned.
Apologies for my long silence, I have been keeping very busy lately (you know how it is…).
There is a recent point/counterpoint in Medical Physics between two admirable opponents, Brian Kavanagh and Geoff Ibbott, on the question of whether or not medical physicists should require additional credentials to perform special procedures like SBRT. Brian Kavanagh argues for special credentialing; Geoff Ibbott argues that the process of ABR certification and maintenance of certification (MOC) is sufficient to qualify medical physicists for special procedures. In addition, Ibbott feels that the last thing medical physicists need is another layer of bureaucracy.
While I am mostly with Dr. Ibbott on the 2nd point, I strongly disagree on the first. The ABR certification process, as rigorous as it is, does nothing to guarantee that a physicist is competent in newer technologies that may have come after the time of their original certification. There is simply not enough time in the original few hours of written and oral exams to comprehensively cover all the knowledge in as broad a field as medical physics. I can only quote from my own experience, but nothing in the process of my ABR certification was related to radiosurgery, for example.
And the requirements for continuing education and quality improvement projects in the MOC process say nothing about the areas of expertise that one maintains. A physicist could spend 10 years studying nothing but brachytherapy and fulfill all the requirements for MOC- this shouldn’t qualify him to do radiosurgery.
Ibbott’s other argument is that the ABR certification process is sufficient to guarantee a candidate’s professional judgment and ethics. My feelings on this point are perhaps best summarized by an exchange from my favorite movie, The Princess Bride:
Inigo: I could give you my word as a Spaniard.
Westley: You'll have to do better than that; I've known too many Spaniards.
(Just substitute ‘certified physicist’ for Spaniard). I don’t mean to denigrate the profession- the vast majority of physicists I have met and worked with have been competent, ethical professionals. But the certification process certainly is no guarantee of that.
Of course, neither Dr. Ibbott nor Dr. Kavanagh would dispute that physicists starting a new program in an advanced technology need adequate time and training to do so; the question is whether or not there needs to be a formal certification for this technology, and if so, who should be administering it. In my work as a consultant, I have seen and been involved with many clinics implementing new programs. The physicists involved are often under tremendous pressure to start a program, given little budget and less time to do their work. A credentialing process, similar to the requirements for joining RTOG trials, would actually go a long way toward protecting our colleagues from the sorts of errors that come from rushing into a program half-prepared. Certainly it would be preferable to have credentialing controlled within the profession, rather than waiting for state or federal agencies to do it for us.
That’s all for now.
Just back from AAPM, and one of the more interesting things I saw while I was there was a demonstration of a new treatment planning workstation called Raystation, put out by Raysearch Laboratories. Raysearch has in the past contributed software modules to several other planning systems, including Philips Pinnacle and Varian Eclipse, but they decided a few years ago to make a full planning system of their own, which has now been installed in a few centers in the US and Europe.
The most interesting feature of the new system was a function called multicriteria optimization, also known in the literature as Pareto optimization. Like so much of optimization, Pareto optimization arose from the field of economics, originally developed by the Italian economist, engineer and sociologist Vilfredo Pareto. (Aside: Pareto led a fascinating and highly controversial life, praised by some as one of the founders of modern economics, reviled by others for his support for Italian fascism. He is best remembered for his contributions to mathematical economics, and for the Pareto principle, or '80/20 rule'.)
In a nontrivial optimization problem, head and neck treatment planning for example, one typically has multiple objectives that must be met: dose coverage of the target, and sparing of multiple organs at risk to maximum dose or dose-volume limits. There are trade-offs in how well all of these may be met simultaneously, and often one reaches a point where one cannot further improve target coverage, say, without making the dose to the organs at risk worse.
In one form of pareto optimization, the approach to such a problem is to generate a large representative set of possible solutions, each one focusing on a different constraint, and then to interactively combine them, tuning the solution until it meets the best possible compromise to all objectives. In this example, one would generate a series of treatment plans, one that covered the PTV perfectly, one that completely spared the spinal cord, another that completely spared the parotid glands, etc. Then these multiple solutions would be combined into one final plan that met all of the dose goals at once.
In the Raystation demo that I observed, a head and neck plan was optimized a single time to generate 14 possible solution plans. At the end of the process, the planner is then able to adjust the plan combination using slider bars for each constraint. Once a constraint is just met, the planner can lock it in and go on to the next, i.e., all subsequent solutions must at a minimum meet the 'locked' constraint. As more and more of the constraints are locked, the range of adjustments the planner can make becomes more and more limited, and this is shown graphically on the slider bars for each constraint. By going from the most important constraints to the least important, one can find the best possible compromise available.
In this way, one can run a single optimization, and (in theory) always be confident of getting the best possible solution. Of course, a demo is a demo, but I have since heard from one of the early users of this planning system that it works the same way with their clinical plans- they run only one optimization, even on their most difficult plans.
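For the programming-minded, the navigation step can be pictured as taking convex combinations of the anchor plans' dose distributions, with 'locked' constraints restricting which combinations are still allowed. The little Python sketch below is only a toy illustration of that idea - the arrays, function names and simple max-dose constraints are all hypothetical, and this is not how Raystation (or any commercial system) actually implements deliverable Pareto navigation:

```python
import numpy as np

def navigated_dose(anchor_doses, weights):
    """Combine anchor-plan dose arrays with non-negative weights that sum to 1."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, np.stack(anchor_doses), axes=1)

def satisfies_locked(dose, locked_constraints):
    """Check each locked constraint, given as a (voxel mask, max dose in Gy) pair."""
    return all(dose[mask].max() <= limit for mask, limit in locked_constraints)

# Toy three-voxel geometry: two PTV voxels and one spinal cord voxel
ptv  = np.array([True,  True,  False])
cord = np.array([False, False, True])

# Three hypothetical anchor plans, each optimized for a single objective
anchors = [np.array([60.0, 60.0, 30.0]),   # best PTV coverage
           np.array([50.0, 48.0,  5.0]),   # best cord sparing
           np.array([55.0, 54.0, 15.0])]   # a compromise plan

locked = [(cord, 20.0)]                     # cord max dose 'locked' at 20 Gy
plan = navigated_dose(anchors, [0.4, 0.4, 0.2])
print(plan[ptv].min(), plan[cord].max(), satisfies_locked(plan, locked))
# -> 54.0 17.0 True : PTV coverage retained while the locked cord limit holds
```

In this picture, moving a slider bar corresponds to changing the weights, and locking a constraint simply rejects any weight setting whose combined dose would violate it.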
This approach in radiation therapy was first developed at MGH by Thomas Bortfeld and his colleagues. For more in-depth information, the reader is referred to their publications, for example, Craft et al.1 (2007).
There have been a series of innovations brought into our field by relatively small vendors-the initial commercial introductions of IMRT, IGRT, robotic radiosurgery and, more recently, MR-based treatment planning, have all come from small companies. The long-term viability of these companies in a small, crowded market like radiation oncology can be an issue, but the innovations eventually get adopted by the field as a whole. Companies like Raysearch are to be commended for taking a chance on new, innovative approaches.
That’s all for now.
1. Craft D, Halabi T, Shih H, and Bortfeld T, "An Approach for Practical Multiobjective IMRT Treatment Planning", IJROBP Vol. 69, No. 5, pp. 1600-1607 (2007).
Although I am not really a gamer (though I will confess to enjoying Wii sports games), I am fascinated by the incursion of gaming technology into medicine, particularly radiation therapy. The enormous market for gaming (estimated at over $100 billion a year) has led to the development of many technologies that may be adapted for medical use. A recent publication on the use of the Microsoft Kinect™ (Xia and Siochi, Medical Physics 2012) is a case in point.
The authors describe using a cheap, widely available gaming camera as a low-cost respiratory gating system; the entire system they describe, although not exactly 'off the shelf', was put together for about $600, including the laptop used to run it. The programming was done using the software development kit Microsoft has released for the Kinect™ system.
The gating system consists of a Kinect™ camera, a laptop, and a 'translation surface', essentially a reflective plate used to generate a more uniform reflection, free of the surface irregularities of a real patient (caused by wrinkles in clothing, etc). The plate is placed over the patient's abdomen, performing a similar function to the reflective marker used in the RPM™ system. By slightly angling the plate, they were also able to improve the depth resolution of the camera from 1 cm to 1 mm.
The authors showed that the signal from the camera system was essentially equivalent to the signal generated by their clinical gating system, a strain gauge belt marketed by Anzai Medical. There were some slight differences in amplitude between the two systems, not surprising since they are measuring different quantities. One shortcoming of the paper is that the authors did not provide any measure of the correlation of phase between the two methods, for those doing phase-based gating.
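For the curious, here is a minimal sketch of the basic idea: average the depth over a region of interest on the plate to get a respiratory trace, then gate within an amplitude window. The function names, ROI, window and numbers are all hypothetical (this is not the authors' code, nor the Kinect SDK API), but it captures the principle:

```python
import numpy as np

def respiratory_trace(depth_frames, roi):
    """Mean depth (mm) over the plate ROI in each frame gives a 1-D breathing signal."""
    r0, r1, c0, c1 = roi
    return np.array([frame[r0:r1, c0:c1].mean() for frame in depth_frames])

def amplitude_gate(trace, window_mm=3.0):
    """Beam-on whenever the surface is within window_mm of the end-exhale level."""
    baseline = np.percentile(trace, 95)     # deepest (end-exhale) surface positions
    return np.abs(trace - baseline) <= window_mm

# Synthetic example: 1 cm peak-to-peak abdominal motion, 4 s period, 30 frames/s
t = np.arange(0, 20, 1 / 30.0)
surface = 800 + 5 * np.sin(2 * np.pi * t / 4.0)    # plate distance from camera, mm
frames = [np.full((10, 10), d) for d in surface]   # idealized flat-plate depth frames
trace = respiratory_trace(frames, (2, 8, 2, 8))
print(amplitude_gate(trace).mean())                # duty cycle (fraction of beam-on time)
```

A real system would of course have to handle noise, dropped frames and phase-based gating as well, which is exactly where the comparison with a clinical system like the Anzai belt becomes important.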
The paper has the feel of a clever science fair project, which is by no means an insult. Many advances in science and engineering have come from just this kind of clever tinkering with simple and cheap components. The gaming industry has resulted in the widespread availability of advanced technologies. I suspect the possibilities for their use in more serious applications are only beginning to be realized. Stay tuned.
Xia, J. and R.A. Siochi, "A real-time respiratory motion monitoring system using KINECT: Proof of concept", Medical Physics 39, no. 5, pp. 2682-2685, 2012.
This is going to be a bit different, because it doesn’t directly relate to current research or news items. This is just a pet peeve of mine, and something I think needs to be said about the acquisition of any new technology.
In my work I have travelled to literally hundreds of departments around the country over the last 20 years, usually training the center in new technologies. (In roughly chronological order, IMRT, IGRT, gating/motion management, radiosurgery, SBRT- haven’t gotten to protons yet). There are of course a whole host of other technologies involved in planning and treatment as well: CT, PET-CT, MR, MLCs of various designs, film scanners, chambers, diode and chamber arrays, phantoms simple and complex.
When a hospital acquires new equipment, what happens, more often than not, is that the clinical staff never gets any meaningful experience working on the equipment until someone’s life depends on it. Therapists get shown the rudiments of IGRT by a vendor trainer (physicians sometimes get no training whatsoever), and are expected to be treating patients like experts the next day. 4DCT and gating equipment gets installed, sits unused for 6 months, then the very first patient chosen for gated delivery is an SBRT lung getting 60 Gy in 3 fractions. IMRT and SRS QA phantoms involving complex hardware and software components sit in their boxes until the night before patient treatment.
In many cases, the new technology can do more harm than good, if it is used improperly. I have seen confused therapists using cone-beam CT for the first time, making large and incorrect patient shifts they wouldn’t have considered during conventional setups. I have seen physicists running IMRT QA on equipment without receiving any training on it, and completely misinterpreting the results.
The implementation of any new technology should include some ‘grace period’ where the staff gets to practice in less-critical situations (recognizing that all patient treatments are critical). Run 4DCT scans and perform gating on regular lung patients before you rely on this technology to guide radiosurgery. Use your IMRT/SRS QA phantom on test cases (preferably covering a broad range of target sizes and degrees of complexity) before you have a patient coming in the next morning.
This sort of dry run testing shouldn’t be treated as something to do “when I get around to it.” Staff time for this testing should be scheduled and that schedule protected by management. Time for adequate in-house testing should be part of the budget of any new technology purchase.
Vendors could also help the process along- for example, they could provide a broad range of anonymized patient image data for the therapists to practice with for IGRT image registration. Again, such a project involves some expense, but you would think it would be well worth it to the vendors to ensure that clinics are using their equipment safely and effectively.
Finally, each new version of hardware and software should spur a new round of training and assessment, especially for newer technologies where there can be substantial changes from one version to the next.
Hospitals are willing to spend tens of thousands to millions of dollars acquiring new technology. Yet few are willing to spend a few thousand more to give their staff the time to learn to use them properly. Time is money, but time is also safety, efficacy, and for some patients, life and death.
That’s all for now,
I just returned from the 2012 SRS/SBRT conference, and I learned a new word: 'abscopal'. The word is derived from the Latin prefix 'ab', meaning 'away from', and the Greek 'scopos', meaning 'target'. It describes the surprising effect that radiation can sometimes have on cancer adjacent to, or even widely separated from, the primary target. This concept, first described in publication in 1909(!), has gained new relevance in the era of stereotactic radiosurgery (SRS) and stereotactic ablative radiotherapy (SABR).
Back in the early days of radiation therapy, treatments were limited by the low (kV-level) energy of the radiation then available. It was often not possible to deliver effective therapy to deep-seated tumors because of excessive doses received at skin level. In order to mitigate this, a technique called ‘grid therapy’ was developed, where the radiation field was partially blocked, typically delivered through a lead block with a grid of 1 cm holes drilled into it. This technique allowed high doses to be delivered to at least part of deep seated tumors, while being more tolerable at skin level.
Often, the treatment was surprisingly effective at shrinking large, bulky tumors, even though the radiation was only being delivered to a small fraction of the total volume. At the time, no one knew why this happened, or why it was more effective in some patients than others. When higher energy machines (Cobalt 60 machines and linacs) became available in the 50’s and 60’s, grid therapy was largely abandoned. A few departments still do grid therapy for isolated palliative cases, and often see remarkable tumor shrinkage, but until recently little serious study of the technique has been done.
More recent work has begun to decipher exactly what is happening with grid therapy, and how it may be relevant to recent work in radiosurgery. It is now accepted that the high fractional doses given in SRS and SABR have both direct and indirect effects on cancer tissues: directly, by disrupting cellular function and division in cancer cells, and indirectly, by destroying the endothelial cells (for example, those lining the blood vessels) that support the cancer. Recent work in cell biology has shown that these same high doses cause cancer cells to release cytokines that promote apoptosis (programmed cell death) in endothelial cells.
At the conference, Mansoor Ahmed of the University of Miami described experiments he had performed where a mouse was implanted with tumors on both legs. One tumor was treated with grid therapy to a dose of 10 Gy, the other was left untreated. As expected, the treated tumors began to shrink after therapy. Amazingly, the untreated tumors began to shrink, too. However, if one leg tumor was fully treated with an open field to the same dose, the treatment shrank the treated tumor but had no effect on the untreated tumor.
This ‘bystander’ effect may enhance the ability of high, ablative doses of radiation to destroy tumors, particularly for patients with healthy immune systems. Many issues remain with the effective application of this therapy in patients, including how best to quantify and predict the effects in terms of biologically effective dose, and how this therapy might be effectively combined with more conventional therapies to deliver non-palliative treatments. To me, it’s just fascinating that some of the oldest concepts in radiation therapy may have relevance to the newest technologies in the field. Stay tuned.
That’s all for now,
I have been reading The Emperor of All Maladies, the award-winning 'biography of cancer' by Siddhartha Mukherjee. Mukherjee is a medical oncologist and researcher at Columbia University. He claims he was inspired to write the book by a patient who wanted to better understand the disease she was fighting. Mukherjee quotes Sun Tzu's The Art of War: "If you know your enemy and know yourself, you will not be imperiled in a hundred battles".
The book begins with the initial description of cancers by ancient Egyptian and Greek physicians and ends with the stunning recent advances in the knowledge of cancer as a genetic disease, and genetically-targeted therapy drugs like Herceptin. As it turns out, these advances came both from knowing our enemy and knowing ourselves, since the processes used by cancer cells to grow and spread are often modified versions of the processes used in the growth of normal tissues.
I am often impatient reading science books written for a lay audience, but this book was engaging throughout, not just for its science but for the colorful cast of characters one meets along the way. There’s the cocaine and morphine-addicted surgeon William Halsted, who pioneered radical mastectomies, an operation that in many cases turned out to be needlessly aggressive. Who knew that the initial, promising results of lumpectomy plus radiation were first published in 1927?
The book is honest enough to deal with the failures of cancer therapy as well as its successes. The tragic story of the STAMP regimen of high-dose chemotherapy is a case in point. After very positive initial results were published on the effectiveness of high-dose chemotherapy in breast cancer, breast cancer advocates pushed for 'compassionate' treatment of women before definitive clinical trials could prove its effectiveness. Thousands of women, most outside of clinical trials, were subjected to a highly toxic, aggressive therapy that conferred no benefit. All of this was based on the incredible, and as it turns out fraudulent, clinical data of a single South African doctor.
I must confess, I was largely unaware of much of the history discussed in this book, even the events that occurred when I was in graduate school in a cancer research hospital. Current training of medical physicists includes very little discussion of surgery and chemotherapy- shouldn’t we know more about the major therapy options in our field?
Along the same lines, I was surprised by the almost complete absence of radiation therapy from this book. Indeed, the most recent development in radiation therapy discussed in this book is the use of hemibody radiation in the 1970’s. There is no mention whatsoever of the advances in conformal therapy (IMRT, IGRT, radiosurgery) since that time. I kept wondering as I read this book if this was typical of the perspective of medical oncologists. We would all do well to know our friends, as well as our enemies, a little better.
That’s all for now.
Previously I had blogged about a paper from Harvard (Margalit et al, 2011) on the impact of the introduction of IMRT on the number and type of treatment delivery errors at Brigham and Women’s Hospital in Boston. This month, a new paper from my own institution, the University of Pittsburgh Medical Center (UPMC), adds to this discussion.
Olson et al (2012) looked at data on treatment errors across a large network of academic and community practice centers run by UPMC over a period of three years. They identified errors or potential errors in both IMRT and conventional therapy. They also graded the incidents on an error severity scale, in order to determine whether advanced technology influenced either the number or the severity of the errors seen.
The paper also addresses the rather unique nature of the UPMC network, where IMRT planning is performed at a centralized facility (D3) and a centralized radiation physics division seeks to ensure equivalent quality of care at a large number of clinics (3 academic centers and 16 community practices were involved in this study). As at Harvard, a non-punitive error-reporting system is in place at all centers.
The clinical error severity scale (CRESS) employed in this study includes both potential and actual treatment delivery errors. For example, an error found and corrected before the first dose fraction was delivered would be given a score of 1, while an actual dose delivery error that was corrected by altering the dose delivered in subsequent fractions would be given a score of 3. There were no errors resulting in injury or death (scores 8-10) seen in this study.
Again, similar to the Harvard study, the use of IMRT was associated with a markedly lower rate of treatment delivery errors, roughly half the number seen with 3D treatment delivery, and the errors with IMRT were also less severe. The rate of error was reported per course of treatment rather than per fraction (as reported in the Harvard study), so the numbers are not directly comparable between the two papers.
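To make the comparison concrete, the kind of tally involved looks something like the toy Python below. The incident records and course counts are entirely made up (they are not data from either paper); the point is simply that grading each incident on a severity scale lets you compare both the frequency and the seriousness of errors per course of treatment:

```python
# Hypothetical incident log: (delivery technique, severity score on a 1-10 scale)
incidents = [
    ("3D",   1), ("3D",   3), ("3D",   2), ("3D",   1),
    ("IMRT", 1), ("IMRT", 2),
]
courses = {"3D": 400, "IMRT": 600}   # made-up numbers of treatment courses delivered

for tech, n_courses in courses.items():
    scores = [s for t, s in incidents if t == tech]
    rate = len(scores) / n_courses                    # incidents per course
    mean_severity = sum(scores) / len(scores) if scores else 0.0
    print(f"{tech}: {rate:.3%} of courses with an incident, mean severity {mean_severity:.1f}")
```

Converting such a per-course rate into a per-fraction rate (as in the Harvard paper) would require knowing the number of fractions per course, which is why the two studies' numbers can't be compared directly.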
The study found no difference in error frequency between the academic and community centers. As in the Harvard study, they found no change in the error rate with time, despite the implementation of 39 system-wide policy changes in response to treatment delivery errors over the course of the study.
In the Harvard study, the start of the study period coincided with the introduction of IMRT. In the UPMC study, IMRT and the Quality Improvement committee had been in place for several years before the start of the study period, which may in part explain why no significant change in error was seen during the study.
This paper uses a severity scale for treatment errors and potential errors, while the Harvard paper divided errors into categories (e.g., patient setup errors, accessory errors, errors in machine settings). It would be good for a national body to develop a standardized error reporting system with both categories and a severity scale - the more information the better.
One question not fully answered by these two papers is the relative effectiveness of the error reporting and mitigation systems put in place. In both papers, the error rate started low and stayed there throughout the period studied; at both institutions, error reporting systems were already well established at the start of the study period. While it is comforting to learn that the introduction of high-technology treatment delivery has improved the error rate, it would also be interesting to learn whether the error rate may be further reduced by other interventions, for example the use of checklists. (If you haven't already done so, go out and purchase and read "The Checklist Manifesto" by the physician and writer Atul Gawande. It is a short book and well worth your time.)
That’s all for now.
Olson AC, et al., "Quality Assurance Analysis of a Large Multicenter Practice: Does Increased Complexity of Intensity-Modulated Radiotherapy Lead to Increased Error Frequency?", IJROBP Vol. 82, No. 1, pp. e87-e92 (2012).
Margalit DN, et al., "Technological Advancements and Error Rates in Radiation Therapy Delivery", IJROBP Vol. 81, No. 4, pp. e673-e679 (2011).
Radiotherapy and Oncology recently published a special edition on the frontiers of radiation biology, and, as usual, things on the frontier can get a little messy. The journal contained two consecutive articles with seemingly contradictory results, relating to the new high dose rate beams (flattening filter free, or FFF) available on new linacs like the Varian Truebeam.
I say 'seemingly' contradictory because the articles deal with different experimental setups, and use different cell lines, so the data are not exactly equivalent. Such results are not unusual in biology (said the smug physicist), but rather than indicating an error, they may point to new information about the response of cells to radiation damage.
The FFF beams differ from regular photon beams in both their profile and the amount of dose per individual pulse (for any nonphysicists still reading, linear accelerator radiation is delivered in short, microsecond-scale pulses, with different dose rates achieved by changing the pulse repetition frequency, or PRF). The new FFF beams may have instantaneous dose rates as high as 300 Gy/s, about 4 times the rate of conventional linac radiation beams.
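The distinction between the dose rate within a single pulse and the time-averaged dose rate is easy to see with a toy calculation. The numbers below are purely illustrative (they are not taken from either paper, nor from any particular machine):

```python
def dose_rates(dose_per_pulse_gy, pulse_width_s, prf_hz):
    """Instantaneous (within-pulse) dose rate in Gy/s and mean dose rate in Gy/min."""
    instantaneous = dose_per_pulse_gy / pulse_width_s
    mean = dose_per_pulse_gy * prf_hz * 60.0
    return instantaneous, mean

# Flattened beam: lower dose per pulse, higher pulse repetition frequency
print(dose_rates(2.8e-4, 4e-6, 360))   # ~70 Gy/s within a pulse, ~6 Gy/min averaged
# FFF beam at a matched mean dose rate: ~4x the dose per pulse, ~1/4 the PRF
print(dose_rates(1.1e-3, 4e-6, 90))    # ~275 Gy/s within a pulse, ~6 Gy/min averaged
```

This is exactly the knob turned in the first paper described below: the same total dose and the same mean dose rate, but a very different dose per pulse.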
We all have a natural bias towards papers with positive results, so let's start there. Lohse et al, from the University Hospital in Zurich, Switzerland, irradiated two glioblastoma cell lines with a single radiation fraction using two different beams from a Varian Truebeam: the standard 10MV beam and the 10FFF beam. They found that when the same dose was delivered to the cells at the same mean dose rate (the 10FFF beam giving a higher dose per pulse, but a lower pulse rate), cell survival was reduced with the 10FFF beam, and this effect increased with the size of the single fraction dose. For a single 10 Gy fraction, for example, cell survival in one cell line was reduced by more than a factor of two with the 10FFF beam versus the 10MV beam.
They next reduced the dose per pulse in the FFF beam by placing attenuating material in the beam, until the dose per pulse matched that of the regular 10MV beam, and repeated the experiment. This time, there was no difference in cell survival between the two beams. Finally, they delivered the same beam (10FFF) at different pulse repetition frequencies, effectively changing the mean dose rate from 4 Gy/min to 24 Gy/min. Again, there was no change in cell survival for any dose rate.
All this suggests that the cells are susceptible to changes in the instantaneous dose rate of a microsecond pulse of radiation, even when the total delivered dose and mean dose rate stay the same. The authors suggest that this may be because the higher instantaneous delivered dose may induce a more destructive type of damage (double strand breaks) in DNA.
Now on to the negative result. Sorensen et al, from Aarhus University Hospital in Denmark, irradiated two different cell lines (Chinese hamster V79 cells and a human SCC cell line). The authors didn't have a linac with an FFF beam, but they increased the effective dose rate to the same levels by moving their cell Petri dishes closer to the radiation source. This effectively increased both the dose per pulse and the mean dose rate. In this experiment, they saw no difference in cell survival for any total delivered dose from 1-10 Gy.
Exactly how does one reconcile two such papers? The first thing to do is to wait for other labs to repeat and confirm the results published here. The next step might be to try the methods of one paper on the cell lines of the other, to determine whether the changes are due to the physics or to the biology of the individual cell lines. These results, if confirmed, might point to a hitherto unknown way to improve the therapeutic ratio. It may turn out, for example, that glioblastoma cells are sensitive to high dose pulses, but the brain tissue surrounding them is not. And of course, none of this may hold together when translated into the even messier environment of a living human being. Stay tuned.
That’s all for now.