
Personalised medicine, genetics and Big Data: the "New Jerusalem" for dementia?

The fact that there are real individuals at the heart of a policy strand summarised as ‘young onset dementia’ is all too easily forgotten, especially by people who prefer to construct “policy by spreadsheet”.

It is relatively uncommon for a dementia to be down to a single gene, but it can happen. And certainly, even if there might not be a ‘cure’ for today or tomorrow, identification of precise genetic abnormalities might provide scope for genetic counselling. Markus (2012) argues that many monogenic forms of stroke are untreatable, and therefore specialised genetic counselling is important before mutation testing. This could be particularly important in asymptomatic individuals, or those with mild disease; for example, potential cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy (CADASIL) patients who have migraine but have not yet developed stroke or dementia. Mackenzie and colleagues (Mackenzie et al., 2006) reported on a group of families with frontotemporal lobar degeneration characterised pathologically by tau-negative, ubiquitin-immunoreactive neuronal intranuclear inclusions (NII). The authors discussed how findings across the literature appeared to suggest that, in this particular condition, NII are a highly sensitive pathological marker for progranulin gene mutations, and that their demonstration may be a way of identifying cases and families that should undergo genetic screening.

But is this genomics revolution the beginning of a “New Jerusalem” in dementia, beyond the headlines?

“Big data” refers to information that is too large, varied, or high-speed for traditional methods of storage, processing, and analytics. For example, one application of mining large datasets that has been particularly productive in the research community is the search for genome-wide associations (genome-wide association studies, or “GWAS”). GWAS rely on analysis of DNA segments across vast patient populations to search for DNA variants associated with a particular disease. To date, GWAS analyses have identified a handful of promising genetic associations with Alzheimer’s disease, including APOE ε4.
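The statistical step at the heart of a GWAS can be illustrated with a toy sketch: for one variant, compare how often the risk allele appears in cases versus controls. All counts below are invented purely for illustration; real studies test hundreds of thousands of variants with stringent multiple-testing corrections.

```python
# Toy sketch of a single-SNP association test: a 2x2 chi-square comparing
# allele counts in cases versus controls. Counts are invented.

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical risk-allele counts (two alleles per person, 500 per group):
cases_with, cases_without = 620, 380
controls_with, controls_without = 450, 550

stat = chi_square_2x2(cases_with, cases_without,
                      controls_with, controls_without)
odds_ratio = (cases_with * controls_without) / (cases_without * controls_with)
# stat well above 3.84 (the 5% threshold at 1 degree of freedom) suggests
# an association; a real GWAS demands far stricter genome-wide thresholds.
```

The point of the sketch is scale: run across millions of variants and vast cohorts, exactly this kind of simple test is what makes GWAS a “big data” problem.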

This is clearly wonderful if “money does grow on trees”, but the concern about initiatives such as these is that the work is resource-intensive, and diverts resources from frontline improvements in the wellbeing of people living with dementia. Investors also have to be mindful of their financial return compared to the risk of such initiatives. One of the biggest complaints of proponents of “Big Data” is that data tend to be pocketed in a fragmented, piecemeal fashion.

As the McKinsey Centre for Business Technology (2012) state in an interesting document called, “Perspectives on digital business”:

 “The US health care sector is dotted by many small companies and individual physicians’ practices. Large hospital chains, national insurers, and drug manufacturers, by contrast, stand to gain substantially through the pooling and more effective analysis of data.”

Vast collections of genomic data obviously represent a goldmine for health providers around the world. Meltzer (2013) notes correctly that personalised medicine has been the subject of increased basic and clinical research interest and funding. Meltzer describes how a knowledge of the genetic and molecular basis of clinical heterogeneity should make it possible to predict more reliably the likely outcomes of alternative approaches to treatment for specific individuals, and therefore what course of action is likely to be best for any given patient. Knowledge of personal genetic traits might allow accurate prediction of those individuals who are most likely to experience adverse events through medication (Markus, 2012).

Both ‘Big Data’ and ‘personalised medicine’, in being couched in the language of bringing value to operational processes in corporate strategy, tend to lose the precise cost-effectiveness arguments at an accounting level. The new CEO of NHS England, Simon Stevens, will have raised eyebrows with the piece entitled “New NHS boss: service must become world leader in personalised medicine” in “The Guardian” newspaper of 4 June 2014 (Campbell, 2014). Whether the National Health Service of the UK can cope with this, with an inevitable transfer of funds from public to private hands, and with all the talk of ‘sustainability’, is a different matter. It is difficult to predict what the uptake of personalised medicines will be, even if every patient has access to his or her personal genomic sequence in years to come. All jurisdictions have to consider whether they can justify the sharing of information in the public interest overcoming concerns about data privacy and security, and ultimately this is a question of legal proportionality.

The pitch from corporate investors tends to minimise biological practicalities too. For example, it is still yet to be determined what the precise interplay between genetic and environmental factors is, particularly for the young onset dementias. And the assumption that all ‘big’ data are ‘good’ data could be a fallacy. There are around 100 billion neurones in the human brain, and it is well known that not all neuronal connections between them are ‘productive’; in fact a sizeable number are redundant. Heterogeneity in genetic sequences might be meaningful, or utterly spurious, and it could be a costly experiment to wait to find out which, when there are more pressing considerations about both care and cure.

But is this genomics revolution the beginning of a “New Jerusalem” in dementia, beyond the headlines?

Frontotemporal lobar degeneration (FTLD) is the second most common cause of dementia in individuals younger than 65 years (Ratnavalli et al., 2002). It is a progressive neurodegenerative disorder characteristically defined by behavioural changes, executive dysfunction and language deficits. The behavioural variant of FTLD is characterised in its earliest stages by a progressive, insidious change in behaviour and personality, considered to reflect underlying problems in the ventromedial prefrontal cortex (Rahman et al., 1999). FTLD has a strong genetic background, as supported by a positive family history in up to 40% of cases, higher than that reported in other neurodegenerative disorders, and by the identification of causative genes related to the disease (Seelaar et al., 2011). Genetic background might also affect disease outcomes and rate of survival, modulating the onset and the progression of the pathological process when disease is overt (Premi et al., 2012). Given the consolidated role of genetic loading in FTLD, the likely effect of environment has almost been neglected.

Only recently, it has been reported that modifiable factors, i.e. education and occupation, might act as proxies for reserve capacity in FTLD. Patients with a high level of education and occupation can recruit an alternative neural network to cope better with cognitive functions (e.g. Borroni et al., 2009; Spreng et al., 2011). But the search for treatments for particular types of dementia based on their underlying genes and gene products is arguably not an unreasonable one. A good example is provided by the Horizon Scanning Centre of the National Institute for Health Research in September 2013 (NIHR HSC ID: 8239): leuco-methylthioninium, a “tau protein aggregation inhibitor”. It acts by preventing the formation and spread of neurofibrillary tangles, which consist of aberrant tau protein clusters that aggregate within neurons, causing toxicity and neuronal cell death in the brains of patients with certain forms of dementia. Leuco-methylthioninium is a stabilised, reduced form of charged methylthioninium chloride. Clinical trials are under way; at the time of writing, whether the medication works safely remains to be seen.

No. This genomics revolution is not the beginning of a “New Jerusalem” in dementia, especially when social care is on its knees.



Borroni B, Premi E, Agosti C, Alberici A, Garibotto V, Bellelli G, Paghera B, Lucchini S, Giubbini R, Perani D, Padovani A. (2009) Revisiting brain reserve hypothesis in frontotemporal dementia: evidence from a brain perfusion study. Dement Geriatr Cogn Disord, 28, pp. 130–135

Campbell, D. (2014) New NHS boss: service must become world leader in personalised medicine, The Guardian, 4 June. http://www.theguardian.com/society/2014/jun/04/nhs-boss-world-leader-personalised-medicine.

Mackenzie, I.R., Baker, M., Pickering-Brown, S., Hsiung, G.Y., Lindholm, C., Dwosh, E., Gass, J., Cannon, A., Rademakers, R., Hutton, M., Feldman, H.H. (2006) The neuropathology of frontotemporal lobar degeneration caused by mutations in the progranulin gene, Brain, 129(Pt 11), pp. 3081-90.

Mendez, M. (2006) The accurate diagnosis of early-onset dementia. Int J Psychiatry Med, 36(4), pp. 401–12.

McKinsey Centre for Business Technology (2012) Perspectives on digital business.

Rahman, S., Sahakian, B.J., Hodges, J.R., Rogers, R.D., Robbins, T.W. (1999) Specific cognitive deficits in mild frontal variant frontotemporal dementia, Brain, 122(Pt 8), pp. 1469-93.

Ratnavalli E, Brayne C, Dawson K, Hodges JR. (2002) The prevalence of frontotemporal dementia. Neurology, 58(11), pp. 1615-1621.

Spreng, R.N., Drzezga, A., Diehl-Schmid, J., Kurz, A., Levine, B., Perneczky, R. (2011) Relationship between occupation attributes and brain metabolism in frontotemporal dementia,  Neuropsychologia, 49, pp. 3699–3703.


The G8: when dementia care got personal (well, molecular actually)

Big Data picture

At best, the donation of patients’ DNA for free globally in #G8dementia to enhance Pharma shareholder dividend can be sold as ‘coproduction’. It’s easy to underestimate, though, the significance of the G8 summit. It was overwhelmingly about the ‘magic bullet’, not the complexities of care. It made great promotional copy, though, for some.

It was not, as such, health ministers from the world’s most powerful countries coming together to talk about dementia. It was a targeted strike designed to decrease the democratic deficit which could arise between Big Pharma and the public.

Here it’s important to remember what #G8dementia was not about. It was not about what a safe level of health and social dementia care around the world might be. It had a specific aim of introducing the need for a global collaboration in big data and personalised medicine. Researchers whose funding depends on the wealth of Big Pharma were also needed to sing from the same hymn sheet.

For such a cultural change in thinking to take effect, a high profile publicity stunt was needed. Certain politicians and certain charities were clearly big winners. However, with this, it was deemed necessary from somewhere to introduce an element of ‘crisis’ and ‘panic’; hence the terrifying headlines, which served only to introduce a further layer of stigma into the dementia debate.

And yet it is crucial to remember what was actually discussed in #G8dementia.

In a way, the big data and personalised medicine agenda represents the molecular version of ‘person-centred care’, and these are “circles to be squared” for academics and practitioners, or whatever.

Big data and personalised medicine have been corporate buzz terms for quite some time, but while it’s widely known there are correlations between the two, many are still struggling with how to effectively leverage mass amounts of data in order to improve efficiencies, reduce costs, and advance patient-centric treatments.

Medicine’s new mantra is “the right drug for the right patient at the right time.”  In other words, medical treatments are gradually shifting from a “one size fits all” approach to a more personalized one, so that patients can be matched to the best therapy based on their genetic makeup and other predictive factors.  This enables doctors to avoid prescribing a medication that is unlikely to be effective or that might cause serious side effects in certain patients.

Personalised drug therapy in its most sophisticated form uses biological indicators, or “biomarkers” – such as variants of DNA sequences, the levels of certain enzymes, or the presence or absence of drug receptors – as an indicator of how patients should be treated and to estimate the likelihood that an intervention will be effective or elicit dangerous side effects. In the case of Alzheimer’s disease, the hunt for a marker in the ‘brain fluid’ (cerebrospinal fluid) has been quite unimpressive. The hunt for those subtle changes in volumes or abnormal protein levels has not been that great. The information about DNA sequences in Alzheimer’s Disease (more correctly a syndrome) is confusing, to say the least. And there are at least 100 different types of dementia apart from Alzheimer’s Disease (making the quest for a single cure for dementia even more banal, but a great soundbite for politicians who won’t be in office long anyway).

With healthcare costs in the U.S. increasing steadily over the last 20 years to 17% of GDP, and similar scaremongering about ‘sustainability’ from economically illiterate people on this side of the Atlantic too, moronic healthcare “experts” are looking for every path possible towards “efficiency”, “productivity” and “reform”. Many believe that a long-term source of savings could be the use of big data in healthcare; in fact, the McKinsey Global Institute estimates that applying big data strategies to better inform decision making in U.S. healthcare could generate up to $100 billion in value annually.

Significant advancements in personalised medicine, which include genomics, are making it easier for practitioners to tailor medical treatments and preventive strategies to the characteristics of each patient — advancements that supporters say will improve care and reduce costs. Private markets have long capitalised on fear, and dementia represents a nirvana for private healthcare. It is potentially a huge ‘market’ for drugs. Yet progress is being slowed by a number of factors, including the limited sharing of patient information. This is why there was so much shouting about the need for relaxed regulation at #G8dementia. And yet ultimately, these stakeholders, important though they are, know they can go nowhere without a licence from the public. Patient groups and charities represent ‘farms’ for such projects in medicine, as they do for law firms.

Greater sharing, it is argued, would allow medical institutions that are creating patient databases — some with genomic information — to expand the size of the patient pool, thus making it more likely that rare conditions can be identified and treated. Such discussions necessarily avoid the contentious issue of who actually owns personal DNA information. What’s more important? The patient’s privacy, or the public interest? Data sharing, it is argued, would also allow patients to personally store and share their data with different practitioners. The day that everyone will have every detail about their personal health on their smartphones isn’t that far off, some hope.

The other component of the data-accessibility issue is how medical researchers should go about building massive databases of patient records. The ultimate application is a big-data program that could analyse a patient’s data against similar patients and generate a course of action for the physician. This is why the #G8dementia want to get seriously ‘global’ about this project.

Data can help practitioners diagnose patients more accurately and quickly, and identify risk factors much earlier.

Edward Abrahams, president of the Personalized Medicine Coalition, has said,

“The tricky part is that the public wants control over information, but as patients they may think differently”.

The creation of this value lies in collecting, combining, and analysing clinical data, claims data, and pharmaceutical R&D data to be able to assess and predict the most efficacious treatment for an individual patient.  This might be possible through ‘big data’ and ‘personalised medicine’ in a number of key areas.
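The idea of pooling clinical and claims data to predict the most efficacious treatment can be sketched in miniature. Everything below — the biomarker labels, drugs and outcomes — is invented for illustration; real systems would work over millions of records with proper statistical safeguards.

```python
# Minimal sketch: recommend the treatment with the best observed response
# rate among previously seen patients who share a biomarker status.
# All records are hypothetical.
from collections import defaultdict

# Pooled (biomarker status, treatment given, responded?) records.
records = [
    ("marker_positive", "drug_A", True),
    ("marker_positive", "drug_A", True),
    ("marker_positive", "drug_B", False),
    ("marker_negative", "drug_A", False),
    ("marker_negative", "drug_B", True),
    ("marker_negative", "drug_B", True),
]

def best_treatment(marker, data):
    """Return the treatment with the highest response rate among
    patients sharing this biomarker status."""
    outcomes = defaultdict(list)
    for m, treatment, responded in data:
        if m == marker:
            outcomes[treatment].append(responded)
    rates = {t: sum(r) / len(r) for t, r in outcomes.items()}
    return max(rates, key=rates.get)
```

The sketch makes the economics visible: the recommendation is only as good as the breadth and quality of the pooled records, which is precisely why the data-sharing argument matters so much to its proponents.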

Clinical trials are of course necessary for every drug to get to market, and the gold standard is currently a randomised clinical trial backed up by a published paper. Big data approaches are complementary to traditional clinical trials because they provide the ability to analyse population variability and to conduct analytics in real time.

Secondly, the ability to manage, integrate, and link data across R&D stages in pharma might enable comprehensive data search and mining that identify better leads, related applications, and potential safety issues. Sequence data become far more useful when correlated with phenotypes and other types of data. This has naturally affected the way companies think about data storage and structure, with cloud solutions becoming more popular. Illumina, one of the leaders in next-generation sequencing technologies, now offers cloud solutions for data storage and analysis to meet this growing need. Hence it was ‘name checked’ in #G8dementia.

Thirdly, once R&D data and clinical trial data are indexed for big data analysis, the third piece of the big data puzzle is bringing that analysis into routine clinical practice. Ultimately personalised medicine is about this correlation of diagnostics and outcomes, but tailored to each and every patient.

While big data has already been used successfully in consumer markets, challenges remain to its implementation in healthcare. The primary challenge in moving to big data approaches is simply the vast amount of data in existing systems that currently don’t “talk” to one another and hold data in different file types. Hence there was considerable talk about ‘harmonisation’ of data at the #G8dementia conference. The second challenge for data in the clinical space is how to store and share these large amounts of data while maintaining standards for patient privacy. Achieving better outcomes at lower costs (aka ‘doing more for less’) has become the exhausted strapline for the NHS recently, and big data may seem particularly attractive to NHS England in its thirst for ‘efficiency savings’.
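What ‘harmonisation’ means in practice can be shown with a tiny sketch: two systems that don’t “talk” to one another — one exporting CSV rows, one exporting JSON claims — mapped into a single common schema. The field names and records are invented for illustration.

```python
# Minimal sketch of data harmonisation: map records from two incompatible
# systems into one common schema so they can be linked by patient ID.
# All identifiers and field names are hypothetical.
import json

hospital_csv_row = "P001,1947-03-02,F,donepezil"            # id,dob,sex,drug
insurer_claim = '{"patientRef": "P001", "rx": "donepezil"}'  # claims record

def from_hospital_csv(row):
    pid, dob, sex, drug = row.split(",")
    return {"patient_id": pid, "dob": dob, "sex": sex, "medication": drug}

def from_insurer_claim(raw):
    rec = json.loads(raw)
    return {"patient_id": rec["patientRef"], "medication": rec["rx"]}

a = from_hospital_csv(hospital_csv_row)
b = from_insurer_claim(insurer_claim)
# Once in a common schema, the two records can be linked and compared.
same_patient = a["patient_id"] == b["patient_id"]
```

Trivial at this scale; across thousands of legacy systems, each with its own formats and privacy constraints, it is the expensive part of every big data project.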

However, bridging the “democratic deficit” remains THE fundamental problem. If you thought the #G8dementia was like an international corporate trade fair, you may not have been invited to other similar events.

The 16th European Health Forum brought together 550 delegates from 45 countries, to take the pulse of Europe’s healthcare systems five years after the 2008 financial crisis and consider what needs to be done now to build ‘Resilient and Innovative Health Systems for Europe’. The Big Data workshop was organised by EAPM and sponsored by EFPIA, Pfizer, IBM, Vital Transformation, and the Lithuanian Health Forum.

There, key issues to do with ownership, security and trust had to be addressed, believed Amelia Andersdotter, MEP: “We have some serious challenges for politicians and industry to preserve citizens’ confidence.”

Vivienne Parry in #G8dementia had wanted to talk about ‘safe’ data not ‘open’ data. And where did this idea come from?

Ernst Hafen, of the Institute of Molecular Systems Biology, ETH Zurich, has said, “We all have the same amount of health data.” Applying big data to personalised medicine “only works if we are comfortable with [our data] being used. We have to provide a safe and secure place to store it, like a bank,” according to Hafen. Tim Kelsey used exactly the same language of banks at a recent event on innovations at London Olympia.

If you think you’ve had enough of PFI, you ain’t seen nothing yet. Public-private partnerships open the way for health data to be shared, and so improve research and translation, according to Barbara Kerstiens, Head of Public Sector Health, DG Research, European Commission. The aim was “to get stakeholders working together on data-sharing and access, and ensure there is a participant-centred approach,” she said.

And how will the law react? Case law is an important means by which we know what is patentable at the European Patent Office (EPO). However, sometimes the EPO’s view of what is patentable in an area changes before the case law does. This can sometimes be detected when Examiners start raising objections they would not have previously done. Meetings between the EPO and the epi (the professional institute for EPO attorneys) are very useful forums for obtaining ‘inside information’ about the EPO’s thinking which is not yet apparent from the case law. The June 2012 issue of epi Information provides a report of such a meeting held on 10 November 2011 between the EPO and the biotech committee of the epi.

Discussion item 8 was reported as follows:

‘8. Inventions in the area of pharmacogenomics
This concerns cases which are based on a genetic marker to treat a disease, for example methylation profiles. It can involve a new patient group defined by an SNP. The EPO said that often the claims can lack novelty, as one patient will have inevitably been treated with the SNP, even if the art does not explicitly say so.’

The EPO’s comments seem to indicate that it is about to change the way it assesses novelty when looking at medical use claims that refer to treatment of a specific patient group.

A “SNP” is a form of genetic marker which varies between individuals. The idea behind the relatively new field of pharmacogenomics is that, if you know which SNP variants a patient possesses, you can personalise the drugs given to a patient in accordance with his genetic makeup. It is now recognised that the genetic makeup of an individual can be very influential as to whether he responds to a drug, and so one application of pharmacogenomics is to only give those drugs to patients who will respond to them.
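In code, the pharmacogenomic idea reduces to a lookup from a patient’s genotype at a given SNP to a treatment decision. The SNP, genotypes and rules below are entirely invented for illustration; real dosing guidance comes from curated sources such as the CPIC guidelines, not hand-written tables.

```python
# Illustrative sketch of pharmacogenomic drug selection: map a patient's
# genotype at one hypothetical SNP to a dosing recommendation.
# The SNP name and rules are invented.
PHARMACOGENOMIC_RULES = {
    # genotype at hypothetical SNP "rs0000001": recommendation
    ("A", "A"): "standard dose",
    ("A", "G"): "reduced dose",
    ("G", "G"): "avoid drug; poor metaboliser",
}

def recommend(genotype):
    # Sort the alleles so ("G", "A") and ("A", "G") are treated alike.
    key = tuple(sorted(genotype))
    return PHARMACOGENOMIC_RULES.get(key, "no guidance; use clinical judgement")
```

The lookup itself is trivial; the hard, expensive part — as the surrounding text argues — is establishing that any given variant actually predicts drug response in the first place.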

Presently, suitable biomarkers for personalised medicine are proving difficult to find. So it seems that the sector is going to require a lot of investment. That’s where #G8dementia came in handy. But investors in biotech do like to see that strong patent protection is available in the relevant sector; hence the upbeat, rosy approach from the speaker from JP Morgan at #G8dementia, who framed the debate in terms of risks and returns.

Personalised medicines, and in fact diagnostics in general, have been thrown into uncertainty in the US after the Supreme Court’s decision in Mayo v Prometheus, which found that a claim referring to steps that determined the level of a drug in a patient was directed to a law of nature and was thus not patentable. It would be unfortunate for personalised medicines to be dealt a further blow by the EPO making the test for novelty stricter in this area.

So there may be trouble ahead.

The #G8dementia was merely the big players, with the help of this Pharma-friendly community in the UK, dipping their toe in the water. It was really nothing to do with frontline health and social care, and any mention of them was really to make the business case look relevant to society at large.

For academics it was interesting in that it was when person-centred care came ‘up front and personal’. Molecular, really.
