In this article [Makras et al., 2008: (http://www.ncbi.nlm.nih.gov/pubmed/18252791)], Makras et al. (2008) described a person with X-linked hypophosphatemic rickets (XLHR), a genetic disorder that impairs the reabsorption of phosphate from the tubular fluid in the proximal tubules, in whom roughly seven years of phosphate supplementation were ultimately required to fully ameliorate his myopathy (muscle weakness, etc.). The authors commented on the mysterious quality of the myopathy and noted that its severity had generally been independent of the person's serum phosphate levels. They also noted that the myopathy had worsened during periods of vitamin D intoxication. I'm not sure whether they're referring to calcitriol or to vitamin D itself, but it probably doesn't matter much: hypercalcemia could conceivably result from supplementation with either vitamin D (at the high doses used in patients with XLHR) or calcitriol and could cause excessive calcium influx into myocytes, thereby impairing mitochondrial ATP formation, or cause hypercoagulability, etc.
Although the authors wrote that vitamin D usually produces rapid improvements in muscle weakness in people who do not have inherited mutations affecting phosphate homeostasis, as in XLHR, it's conceivable to me that the myopathy could have resulted from mitochondrial dysfunction, perhaps as a result of acquired mitochondrial DNA (mtDNA) mutations arising as a consequence of the phosphate depletion. The fact that the degree of muscle weakness was independent of the serum phosphate is not surprising, given that intracellular phosphate levels are known to frequently, if not generally, be independent of steady-state serum phosphate levels in normal humans given low dosages of supplemental phosphate. Given that intracellular phosphate depletion is known to deplete ATP and purine nucleotide pools, and that depletion of the pools of purine deoxyribonucleotides can impair mtDNA replication (see past postings), it's conceivable that intracellular phosphate depletion could impair DNA repair and lead to a gradual accumulation of mtDNA or even nuclear DNA mutations. It's worth noting that the maintenance of an adequate pool of each of the major intracellular purine nucleotides is a prerequisite for the maintenance of pyrimidine salvage. Some of that has been shown in the context of fructose-induced hepatic ATP depletion: researchers have shown that fructose can deplete uridine from the liver and transiently elevate plasma uridine, as one might expect in response to fructose loading. Here are some references on that [see page 33 of the book chapter by Davies et al., 1998, who found that plasma uridine levels increased soon after fructose administration in humans and then decreased considerably by 4 hours after a meal: (http://scholar.google.com/scholar?q=fructose+uridine+plasma+OR+serum&hl=en)].
Thus, intracellular phosphate depletion could conceivably contribute to the development of mutations in nuclear DNA, and to the development of some of the more severe myopathies or intractable disease states, such as chronic fatigue syndrome, by depleting both purines and pyrimidines. That's just my opinion, however. It's noteworthy that DNA repair consumes a lot of ATP, and some authors have suggested, as I noted in my old folic acid paper (see past posting), that the depletion of intracellular total folates might cause apoptotic cell death in neurons through "DNA-repair-associated" ATP depletion. They meant that folate depletion and increases in the dUMP/dTMP ratio would set up a futile cycle of DNA damage and DNA repair, and that the repair would ultimately consume so much ATP as to lead to apoptotic cell death, as can occur in response to ischemic episodes or strokes that cause a lot of DNA damage. Davies et al. (1998) argued that fructose-induced phosphate depletion in the liver had caused both the purine depletion, as evidenced by the elevations in serum uric acid, and the uridine export from the liver. I'm not suggesting that more is always going to be better: the articles on the overlapping mechanisms governing the efflux of uric acid and inorganic phosphate, discussed in recent postings, suggest that the metabolic cost or competitive inhibitory effects of an excess of intracellular inorganic phosphate could become significant past a certain point and could derange the transport of organic anions other than uric acid, etc. Although the research suggests that a lot of phosphate would be required to create that type of state, it's worthwhile to discuss these things with one's doctor.
My view is that the data on the dosages of phosphate used in people with XLHR (and in other genetic disorders that reduce phosphate reabsorption) are relevant to normal humans with regard to the risk of nephrocalcinosis, but I can think of a number of possible objections to that view. The first would be that, in people with XLHR, the rate of phosphate (Pi) reabsorption would be lower than it would be in normal people, and that this would decrease the risk, in comparison to normal people, of intracellular calcium phosphate precipitation. I should mention that, in the long review on nephrocalcinosis that I recently discussed, the author noted that calcification can occur either extracellularly (and "luminally" or intraluminally), on the luminal membranes of the proximal or distal tubule cells or in the interstices of the tight junctions, or intracellularly, in the cells of the renal tubules. Thus, one could argue that normal people would have the same risk of intraluminal calcification as people who have XLHR but a higher risk of intracellular calcification, in response to a given dosage of supplemental phosphate, than people who have XLHR would. In normal people, however, the proximal tubules are able to vary the percent reabsorption between something like 80 and 99 percent, and that gives the proximal tubules a lot of latitude to increase the urinary excretion of phosphate in response to a given dosage of supplemental Pi. I just think the risks are basically similar for normal people and for people who have XLHR.
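As a rough, back-of-the-envelope illustration of that last point, here's a small Python sketch of how strongly urinary phosphate excretion would change as the fractional tubular reabsorption of phosphate (TRP) varied between roughly 99 and 80 percent. The GFR and plasma phosphate values below are just typical textbook-style numbers I'm assuming for illustration, not measurements from any study:

```python
# Rough illustration (assumed textbook-style values, not data from any study):
# how much urinary phosphate excretion could change as the fractional tubular
# reabsorption of phosphate (TRP) varies between roughly 99% and 80%.

GFR_ML_PER_MIN = 120.0       # assumed glomerular filtration rate, mL/min
PLASMA_PI_MG_PER_DL = 3.5    # assumed plasma inorganic phosphate, mg/dL

# Filtered load of Pi (mg/day) = GFR x plasma [Pi], with unit conversions
# (mg/dL -> mg/mL, then min -> day).
filtered_mg_per_day = GFR_ML_PER_MIN * (PLASMA_PI_MG_PER_DL / 100.0) * 60 * 24

for trp in (0.99, 0.95, 0.90, 0.80):
    excreted_mg_per_day = filtered_mg_per_day * (1.0 - trp)
    print(f"TRP = {trp:.0%}: ~{filtered_mg_per_day:.0f} mg/day filtered, "
          f"~{excreted_mg_per_day:.0f} mg/day excreted")
```

With those assumed numbers, dropping TRP from 99 to 80 percent raises the calculated urinary excretion from roughly 60 to roughly 1,200 mg per day, i.e. about a 20-fold increase, which is the kind of latitude I'm referring to.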
The reason I'm focusing on the reabsorption is that XLHR doesn't affect the glomerular filtration of serum phosphate, except to the extent that the cells of people with XLHR might be more "hungry" for phosphate and might clear a dosage of phosphate from the serum more rapidly than a normal person's cells would (thereby indirectly decreasing the amount of phosphate filtered per unit time). In other words, the acute elevations in serum phosphate could conceivably be larger in normal people than in people who have XLHR. But that presupposes that a person has no capacity to tell whether some change in his or her phosphate-to-calcium intake ratio, for example, is producing any benefit. If there's no obvious benefit, presumably there wouldn't be an impetus to continue taking any reasonable amount of phosphate, with the approval of one's doctor. It would also, obviously, be important to spread the dosage out across the day as much as possible and to consider limiting any dosage of supplemental vitamin D to 2,000-4,000 IU or less, given that hypercalciuria is thought to be a major factor that can increase the risk of nephrocalcinosis. Some authors have suggested splitting the total daily dosage of phosphate, in people who have XLHR, into 8 dosages spread out across the day, instead of the usual practice of splitting the dosage into 4-5 increments (see the small sketch below).

Another objection I can think of would be that the PHEX protein or the Na(+)/Pi cotransporter might be expressed in myocytes, myogenic satellite cells, or some other extrarenal cell type. That could mean that mtDNA replication or some other Pi-sensitive metabolic process would be specifically affected in the muscles of people with XLHR and that the effect would not be likely to show up in normal people. But I don't see how a pure and severe case of Fanconi's syndrome couldn't produce the same kinds of long-term problems in postmitotic cell types as something like XLHR can. It probably wouldn't take 7 years to treat the problem in a normal person, but I just think there's a need to view this type of thing with the long term in mind. Someone needs to do long-term safety research with supplemental phosphate in normal people, using reasonable amounts of dietary calcium, etc., or that type of research could be done in people who have chronic fatigue syndrome. I don't know what the best approach would be. One could argue that reasonable, low dosages of phosphate would improve both purine and pyrimidine salvage and could help limit something like the age-associated reductions in mtDNA copy number in different cell types.

These are just my off-the-cuff thoughts, but the notion that 7 days of "phosphate loading" is enough to make anyone "A-okay" doesn't make a whole lot of sense to me, in view of the mechanisms by which both the purine and pyrimidine ribonucleotide pools could become depleted intracellularly. If the intracellular phosphate depletion is brief, then it makes sense to me that only a brief period of time would be required to correct it. But one is not even going to be able to tell, in some cases, whether the intracellular phosphate levels are being maintained, given the frequently-observed independence of the intracellular and extracellular phosphate concentrations. So someone would have to do muscle biopsies, use 31P-MRS intermittently, or measure red blood cell 2,3-diphosphoglycerate levels as a surrogate for the intracellular Pi levels in myocytes, etc.
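On the dose-splitting point above, here's a trivial Python sketch of what spreading a day's worth of phosphate over 4 versus 8 increments would look like. The 1,000 mg total and the 7:00-to-23:00 waking window are placeholder numbers I've made up purely for illustration, not recommendations:

```python
# A trivial sketch of the dose-splitting idea mentioned above: dividing a
# hypothetical daily amount of elemental phosphorus into n increments spread
# evenly across the waking day. The 1,000 mg total and the 7:00-23:00 window
# are made-up placeholder values, not recommendations.

def split_schedule(total_mg, n_doses, first_hour=7, last_hour=23):
    """Return (hour-of-day, mg) pairs for n_doses spaced evenly across the day."""
    per_dose = total_mg / n_doses
    step = (last_hour - first_hour) / (n_doses - 1) if n_doses > 1 else 0
    return [(round(first_hour + i * step), round(per_dose)) for i in range(n_doses)]

for n in (4, 8):
    hours = [hour for hour, _ in split_schedule(1000, n)]
    print(f"{n} doses of ~{round(1000 / n)} mg each, at hours: {hours}")
```

The only point of the sketch is that more increments means a smaller phosphate load per dose and a smaller acute rise in serum (and presumably filtered) phosphate at any one time.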