Monday, August 31, 2009

Precipitation of Free Zinc by Phosphate in Ex Vivo Brain Tissue: Potential Relevance to Zinc-Induced Neurotoxicity

This article [Rumschik et al., 2009: (http://www.ncbi.nlm.nih.gov/pubmed/19183267)] is interesting: the authors found that inorganic phosphate (Pi) can chelate zinc and reduce the influx of free zinc into neurons in ex vivo preparations of brain tissue. This is relevant, in my opinion, to Alzheimer's disease and other neurodegenerative conditions (and also to depression and other psychiatric conditions, etc.), because an excess of free zinc is generally very toxic to cells and can shut down oxidative metabolism and induce necrotic or apoptotic cell death (see past postings). Rumschik et al. (2009) also found that histidine can enhance the "solubility" of zinc, in the sense that the complex formed by the chelation of zinc by histidine can be transported into cells through dipeptide or amino acid transporters. That's not exactly an enhancement of the solubility of zinc; it just means that the zinc-histidine (zinc histidylate or whatever) complex is more soluble than the zinc-phosphate complex(es) are. Glutamine did not meaningfully enhance the solubility, and hence the influx, of zinc, and that's a good thing. The authors mentioned that insoluble, extracellular zinc-phosphate complexes could conceivably become pathological and serve as sites for the nucleation or seeding of Abeta-peptide-containing, extracellular plaques (i.e. a pro-aggregating effect), etc., but I would guess that maintaining an adequate amount of phosphate in neurons and in the CSF would, in comparison to the effects of intracellular (and, to some extent, extracellular) phosphate depletion, tend to exert a net neuroprotective effect. That's just my opinion, but the research has generally shown that intracellular Pi depletion produces neuropathy and other neuropathological effects by multiple mechanisms, and I tend to think that adequate Pi could exert protective effects intracellularly as well, especially given that intracellular Pi levels are higher than extracellular Pi levels.
The authors mentioned that the assumption, in ex vivo or in vitro research, has been that extracellular and CSF Pi levels are around 1 mM, but the human CSF Pi concentration is apparently around 0.47 to 0.50 mM under normal circumstances. That's potentially important, because, even though the steady-state intracellular and extracellular fluid Pi levels, in different cell types, have generally been found to be relatively independent of one another, the intracellular Pi concentration can increase in response to boluses (even small "boluses," meaning single, significant dosages). A relatively higher extracellular fluid Pi concentration might more effectively buffer the intracellular Pi levels during miniature "Pi crises," such as after exercise or, more significantly, after ischemic insults (exercise produces mild ischemia in many organs, but I'm referring to would-be brain injuries here). There's a tendency to think that all values within the normal range, for a blood test or physiological parameter, are equally "good" or equally "normal," but the research on the extracellular and intracellular levels of uric acid and other compounds has shown that this can be a problematic assumption. Small changes in extracellular uric acid levels, for example, can drastically affect the rates of nitric oxide output (and hence, fairly directly, the peroxynitrite output) by the nitric oxide synthases and the macrophages or monocytes that express those enzymes.

I should mention that increases in acidity, meaning an increase in H(+) availability, can enhance the release of zinc from storage vesicles [Colvin et al., 2000: (http://crab-lab.zool.ohiou.edu/colvin/neurochem.pdf)(http://www.ncbi.nlm.nih.gov/pubmed/10762091); Colvin, 2002: (http://ajpcell.physiology.org/cgi/content/full/282/2/C317)(http://www.ncbi.nlm.nih.gov/pubmed/11788343)], and one could make the argument that the potential for an alkalinizing effect of Pi supplementation, such as in the lower dosage range, on the extracellular fluid could further reduce the kinds of wild fluctuations in free zinc concentrations that can be neurotoxic. But the pH dependences of zinc influx and zinc release are complex (Colvin, 2002), and that suggestion of mine is likely to be an oversimplification. Nonetheless, Pi obviously plays a general role in buffering pH changes, and that acid-base buffering effect could be protective against zinc-mediated neurotoxicity.
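That buffering role can be made concrete with a back-of-the-envelope Henderson-Hasselbalch calculation (my own illustrative sketch, using the textbook pKa2 of about 6.8 for the H2PO4(-)/HPO4(2-) pair, not anything from Colvin's articles):

```python
def phosphate_base_fraction(ph, pka2=6.8):
    """Fraction of free inorganic phosphate present as HPO4(2-) at a given
    pH, from the Henderson-Hasselbalch equation for the second ionization:
    pH = pKa2 + log10([HPO4(2-)] / [H2PO4(-)])."""
    ratio = 10 ** (ph - pka2)          # [HPO4(2-)] / [H2PO4(-)]
    return ratio / (1 + ratio)

# At physiological pH ~7.4, roughly 80% of free Pi is HPO4(2-), which can
# absorb H+ by converting to H2PO4(-); at pH = pKa2 the pool is split 50/50.
print(round(phosphate_base_fraction(7.4), 2))  # ~0.8
print(round(phosphate_base_fraction(6.8), 2))  # 0.5
```

The point is just that a sizable HPO4(2-) pool is available to soak up H(+), which is the crude basis for the pH-buffering argument above.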

Saturday, August 29, 2009

Effects of Changes in Phosphate Availability on Glutamatergic Transmission: Potential Relevance to Alzheimer's or Age-Associated Cognitive Impairment

This article [Glinn et al., 1997: (http://www.ncbi.nlm.nih.gov/pubmed/9200502)] is really interesting: the authors found that the ATP levels in cultured neurons correlated positively, up to a point, with the availability of inorganic phosphate (Pi). The concentration-dependences the authors found, for that and other correlations (such as of metabolite concentrations with Pi availability), have the potential to be misleading, because many articles have shown that intracellular ATP or 2,3-bisphosphoglycerate levels do increase in response to seemingly-insignificant, acute increases in extracellular Pi availability (see past postings). It's probable that the availabilities of energy substrates in the culture medium provide the cells with everything they need, and those conditions are unlikely to prevail in vivo. Also, there's some strange discrepancy in the concentrations of free Pi being reported by different groups of researchers. Glinn et al. (1997) and other groups have found concentrations in the 30+ mM range, but others have found that free Pi levels are between roughly 0.8 and 4 mM. Maybe there are differences between cell types, but those seem like awfully large differences. I would think it would be difficult for anyone to determine the percentage that would exist unbound at any given concentration. I'll have to read up on that.

Glinn et al. (1997) also mention some really interesting research suggesting that low Pi availability to the brain may contribute to cognitive dysfunction in people who go on to develop Alzheimer's (reference 30, cited on page 91). They also discuss research showing that Pi availability may help protect against glutamatergic neurotoxicity (i.e. "excitotoxicity"). They mention, earlier in the article, that Pi can be utilized by 3-phosphoglycerate kinase and pyruvate kinase, and I looked into that topic a little bit. In that context, it's interesting that the glycolytic enzymes glyceraldehyde-3-phosphate dehydrogenase (GAPDH) and 3-phosphoglycerate kinase (PGK) form a (dimeric?) enzyme complex, and the 1,3-bisphosphoglycerate (1,3-BPG) that is formed from glyceraldehyde-3-phosphate (GAP), by GAPDH, is then channeled, apparently, to PGK and converted into 3-phosphoglycerate [Ikemoto et al., 2003: (http://www.jbc.org/cgi/content/full/278/8/5929)(http://www.ncbi.nlm.nih.gov/pubmed/12488440?dopt=Abstract)]. What's interesting is that, essentially, Pi is used to fairly directly (via its incorporation into 1,3-BPG) phosphorylate ADP into ATP, and the ATP formed by the GAPDH-PGK complex is preferentially used to transport glutamate into presynaptic vesicles (exogenous ATP is not as effective in promoting vesicular glutamate transport) (Ikemoto et al., 2003). A lot of researchers refer to that ATP-requiring transport as "glutamate uptake," but that terminology could potentially cause one to confuse the process with synaptic glutamate uptake. Ikemoto et al. (2003) are talking about the vesicular glutamate transport that "loads" glutamate in presynaptic vesicles for release. 
There's other research showing that mitochondrial glutaminase (and the mitochondria that contain it) is localized at the sites of synaptic glutamate uptake and that Pi availability, especially insofar as its availability is important for the activation of glutaminase in astrocytes, plays a role in maintaining synaptic glutamate uptake. Some of those articles that cite Glinn et al. (1997) look interesting (http://scholar.google.com/scholar?cites=5143060761810849298&hl=en).
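Getting back to the GAPDH-PGK step discussed above: the phosphate bookkeeping in those two reactions is textbook stoichiometry (this sketch is mine, not from Ikemoto et al.), and it can be tallied to show that the free Pi consumed by GAPDH ends up as the gamma-phosphate of the ATP produced by PGK:

```python
# Phosphate groups carried by each species (textbook stoichiometry;
# NAD+ and NADH are dinucleotides and each carry 2 phosphate groups).
P = {"GAP": 1, "Pi": 1, "NAD+": 2, "NADH": 2,
     "1,3-BPG": 2, "ADP": 2, "ATP": 3, "3-PG": 1}

def total_p(species):
    """Total phosphate groups on a list of species."""
    return sum(P[s] for s in species)

# GAPDH: GAP + NAD+ + Pi -> 1,3-BPG + NADH (+ H+)
assert total_p(["GAP", "NAD+", "Pi"]) == total_p(["1,3-BPG", "NADH"])

# PGK: 1,3-BPG + ADP -> 3-PG + ATP
assert total_p(["1,3-BPG", "ADP"]) == total_p(["3-PG", "ATP"])

# Net effect: one free Pi enters via GAPDH and leaves as ATP's gamma-phosphate.
print("phosphate balanced in both reactions")
```

So the "Pi is used to fairly directly phosphorylate ADP into ATP" point is literally a matter of conserved phosphate counts across the two coupled reactions.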

Friday, August 28, 2009

Regulation of Hexokinase Activity by Inorganic Phosphate and Magnesium

The authors of this article [White and Wilson, 1990: (http://www.ncbi.nlm.nih.gov/pubmed/2306121)] discussed the fact that inorganic phosphate (Pi) antagonizes the inhibitory effect of glucose-6-phosphate on hexokinase activity, and the net result of an increase in the free, intracellular Pi concentration is likely to be an increase in hexokinase activity. The authors of some of these articles make statements implying that hexokinase activity is tightly regulated, but that type of statement is invariably based on the assumption that "adequate" or saturating concentrations of magnesium, Pi, and other regulatory factors will exist in cells (purine nucleotides are among those factors, given that MgADP(-), MgATP(2-), Mg2+ itself, and other nucleotides, alone or complexed with Mg2+, regulate hexokinase activity in complex ways by binding to the multiple nucleotide- or other-anion-binding sites on the enzyme). Many authors explicitly state that they're making that sort of assumption, and it's unlikely to be valid in many cases. Magnesium deficiency is known to be widespread, and it's likely, in my opinion, that intracellular Pi depletion is also common. It's interesting that magnesium is an allosteric activator of hexokinase and also influences the number of allosteric sites to which MgATP(2-) can be bound and the affinity of the binding of MgATP(2-) to hexokinase as a substrate [Bachelard, 1971: (http://www.pubmedcentral.nih.gov/picrender.fcgi?artid=1178047&blobtype=pdf)(http://www.ncbi.nlm.nih.gov/pubmed/5158910)]. That's a complex and somewhat confusing article, in part because the author reported some of the data in terms of the "Mg2+/ATP ratio," and it can be difficult to tell whether the author is saying that an excess of Mg2+ is going to be bound to the enzyme as a free cation or what, exactly, the concentrations of the individual species are going to be.
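To make the Pi-antagonism idea concrete, here's a toy kinetic model (entirely my own illustration, with made-up constants, not the scheme from White and Wilson) in which Pi weakens glucose-6-phosphate inhibition by raising the effective inhibition constant:

```python
def hexokinase_rate(s, g6p, pi, vmax=1.0, km=0.1, ki=0.02, kp=1.0):
    """Toy Michaelis-Menten rate for hexokinase with competitive-style
    inhibition by glucose-6-phosphate (G6P). Pi antagonizes the inhibition,
    modeled here by raising the effective Ki. All constants are illustrative
    (nominally mM), not measured values."""
    ki_eff = ki * (1 + pi / kp)               # more Pi -> weaker G6P inhibition
    return vmax * s / (km * (1 + g6p / ki_eff) + s)

glucose, g6p = 1.0, 0.1
low_pi = hexokinase_rate(glucose, g6p, pi=0.5)
high_pi = hexokinase_rate(glucose, g6p, pi=5.0)
# More Pi relieves G6P inhibition, so flux through hexokinase rises.
print(high_pi > low_pi)  # True
```

The real regulation is far more complicated (multiple nucleotide sites, Mg2+ speciation, etc.), but this captures the one qualitative point: at fixed glucose and G6P, raising Pi raises the predicted rate.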

In spite of the complexity, exogenous magnesium is generally going to produce effects that are consistent with an increase in hexokinase activity and in the overall rate of glycolysis, in my opinion, and that could reasonably be expected to increase intracellular Pi. Magnesium could also tend to increase the sequestration of Pi in glucose-6-phosphate but could conceivably also lead to an increase in Pi turnover over time (such as by inducing changes in intracellular pH, as a result of the increases in the activities of glycolytic enzymes). It's interesting that an important mechanism leading to an increase in phosphate uptake into cells is an insulin-induced increase in hexokinase activity [Siddiqui and Bertorini, 1998: (http://www.ncbi.nlm.nih.gov/pubmed/9572247)], and the extra intracellular Pi would have the potential to further activate hexokinase and other glycolytic enzymes. This would be fine, assuming that the extracellular Pi levels are not going to fluctuate wildly and decrease in response to this type of insulin-induced, somewhat-tissue-specific increase in Pi transport into cells. But one can see how the manifestations of intracellular Pi depletion (and intracellular Pi depletion itself) could be elusive. There tend to be these strange effects, whereby factors that are detrimental to Pi homeostasis in the short term (e.g. insulin-induced hypophosphatemia), or in specific contexts or across multiple organs, are beneficial in the long term and overlap with the effects of Pi itself in complex ways. Assuming that an increase in intracellular Pi enhances insulin sensitivity and activates hexokinase (Pi probably increases insulin sensitivity, in part, *by* activating hexokinase) and other glycolytic enzymes, then would the increase in insulin sensitivity have the potential to further increase Pi uptake and decrease serum Pi in pathological ways, under some circumstances?
There was an article I cited, in a recent posting, showing that the postprandial urinary excretion of Pi was inversely correlated with insulin sensitivity, but, presumably, that wouldn't always be the case. I wonder if hypophosphatemia or wild swings in intracellular and extracellular Pi might occur, in the short term, in response to the introduction of intensive insulin therapy and contribute to the short-term worsening of neuropathy and other disease processes in people who have diabetes [Leow et al., 2005: (http://www.pubmedcentral.nih.gov/picrender.fcgi?artid=1743196&blobtype=pdf)(http://www.ncbi.nlm.nih.gov/pubmed/15701742)]. Even though that type of protocol for administering insulin tends to be beneficial in the long term, there can be short-term issues and transient worsening of neuropathy, etc., in some people.

Thursday, August 27, 2009

Phosphate Depletion Associated With Hypoxia, Autonomic Neuropathy, Hypoventilation, or Paralysis: Potential Relevance to Sudden Infant Death Syndrome

These [Siddiqui and Bertorini, 1998: (http://www.ncbi.nlm.nih.gov/pubmed/9572247); Gravelyn et al., 1988: (http://deepblue.lib.umich.edu/bitstream/2027.42/27325/1/0000348.pdf)(http://www.ncbi.nlm.nih.gov/pubmed/3364446); Steckman et al., 2006: (http://www.ncbi.nlm.nih.gov/pubmed/16642427); Heames and Cope, 2006: (http://www.ncbi.nlm.nih.gov/pubmed/17090245)] are some interesting articles that show some of the variegated manifestations of hypophosphatemia. A crucial fact that I've taken from the research on phosphate homeostasis (it's arguably the most crucial point) is that neither the steady-state nor the between-dosage (in the context of phosphate infusions in animals or phosphate supplementation in humans) intracellular phosphate levels, in either muscle cells or red blood cells, correlate with the serum phosphate levels. For example, Chobanian et al. (1995) [Chobanian et al., 1995: (http://www.ncbi.nlm.nih.gov/pubmed/7900836)] found that the intracellular ATP concentrations in cells in the proximal tubules correlated positively with the intracellular inorganic phosphate (Pi) concentrations, and artificially-induced changes in extracellular Pi concentrations produced changes in the intracellular Pi concentrations. But in human studies, the intracellular Pi values generally do not correlate with serum Pi values, and the intracellular Pi concentrations can be significantly depleted in a person who has a normal serum Pi level.

That usual absence of a correlation between intracellular and serum Pi concentrations means, in my opinion, that intracellular phosphate depletion, in "normophosphatemic" people, should be considered as a possible factor contributing to some of these conditions that have been associated with hypophosphatemia. Siddiqui and Bertorini (1998) cited research showing that phosphate depletion can produce neuropathy that mimics Guillain-Barre syndrome, and the authors described the symptoms of a patient who developed neurological symptoms after she had been given parenteral nutrition without phosphate. The manifestations of the neuropathy were suggestive of demyelinating polyneuropathy but were rapidly reversed by phosphate supplementation, indicating that actual demyelination had probably not occurred. The authors also discussed the fact that an increase in hexokinase activity, in response to insulin released after the intake of carbohydrates, is thought to be an important factor mediating the carbohydrate-induced increase in the transport of phosphate into cells and the decrease in serum phosphate that can result from that transport (Siddiqui and Bertorini, 1998). The authors also cited research showing cognitive dysfunction and encephalopathy in hypophosphatemic or (merely) intracellular-phosphate-depleted people (Siddiqui and Bertorini, 1998). One interpretation of the article by Steckman et al. (2006), in which gallstone-induced pancreatitis occurred in conjunction with hypophosphatemia and improved in response to phosphate administration, is that the phosphate depletion was causing neuropathy and interfering with gallbladder contractions. Neuropathy is known to be associated with gallbladder disease, and the normal functioning and contraction of the gallbladder are regulated by its autonomic (and sensory) innervation [(http://scholar.google.com/scholar?q=neuropathy+gallbladder+gallstone&hl=en);
the visceral sensory innervation can influence mast cell degranulation in the gallbladder, via the efferent-action-potential-mediated release of neuropeptides, and changes in mast cell degranulation and neuropeptide release can influence the autonomic regulation of gallbladder functioning, etc.: (http://scholar.google.com/scholar?hl=en&q=%22mast+cell%22+gallbladder+CGRP+OR+%22substance+P%22+OR+%22vasoactive+intestinal+peptide%22)]. Another interpretation would be that the phosphate depletion caused ATP depletion in the liver and led to cholestasis, etc. Similarly, the respiratory muscle weakness found in association with hypophosphatemia or low serum phosphate levels (Gravelyn et al., 1988, cited above) could be a result of autonomic dysfunction, particularly given that hypophosphatemia can cause reversible quadriparesis (paralysis, meaning the people transiently become quadriplegics) (http://scholar.google.com/scholar?hl=en&q=quadriparesis+hypophosphatemia). The hypoventilation that can accompany hypophosphatemia could also be due to autonomic neuropathy and ATP depletion in parts of the brain (http://scholar.google.com/scholar?hl=en&q=hypoventilation+hypophosphatemia). Hypophosphatemia has also shown up in association with extrapontine myelinolysis (which is, essentially, central "pontine" myelinolysis that doesn't occur in the pons), one of the forms of osmotic demyelination that can result from the excessively-rapid correction of hyponatremia with intravenous, hypertonic saline [Qadir et al., 2005: (http://www.jpma.org.pk//PdfDownload/759.pdf)(http://www.ncbi.nlm.nih.gov/pubmed/16045098)]. The authors suggested that ATP depletion in glial cells in parts of the brain might have contributed to the case, but I'm not sure that the authors actually said that the phosphate depletion might have contributed to or caused the ATP depletion.
The intracellular phosphate may well have been depleted in parts of the brain, and that depletion may have impaired volume regulation and predisposed to the osmotic demyelination.

In any case, I found this article showing "sinusoidal" seasonal changes in the incidence of sudden infant death syndrome (SIDS) (the seasonal change in incidence shows up in both the Southern and Northern hemispheres, and SIDS was found to peak in the winter in both) [Douglas et al., 1996: (http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2351134)(http://www.ncbi.nlm.nih.gov/pubmed/8646093)], and there's old research suggesting an association of SIDS with vitamin D depletion or differences in vitamin D metabolism or rickets, etc. [Schluter, 1996: (http://www.pubmedcentral.nih.gov/picrender.fcgi?artid=2352183&blobtype=pdf)(http://www.ncbi.nlm.nih.gov/pubmed/8842097); (http://scholar.google.com/scholar?hl=en&q=%22sudden+infant+death%22+%22vitamin+D%22)] (or with other light-associated changes, such as changes in melatonin levels induced by sleeping on the back as opposed to the side, etc.) (Douglas et al., 1996). There's also research showing that infants who were experiencing apnea were more likely to be hypercalcemic than infants not experiencing apnea [Kooh and Binet, 1990: (http://www.pubmedcentral.nih.gov/picrender.fcgi?artid=1452283&blobtype=pdf)(http://www.ncbi.nlm.nih.gov/pubmed/2207905)]. I couldn't get results to show up on a quick search, but hypercalcemia has been found to occur in hypophosphatemic people. And although Kooh and Binet (1990) didn't find that serum phosphate levels were associated with apnea in any way, the serum Pi levels wouldn't have to be abnormal for intracellular Pi depletion to be present.
Given that intracellular Pi levels do not reliably correlate with serum Pi levels, that phosphate depletion is known to be capable of causing respiratory paralysis/hypoventilation, hypoxia, and neuropathy [see this and many others, some of which I discussed above: Weber et al., 2000: (http://www.ncbi.nlm.nih.gov/pubmed/10663486)], and that vitamin D depletion is known to be a cause of phosphate depletion (and that vitamin D supplementation can correct that type of phosphate depletion, even in the absence of any genetic defect specifically involving vitamin D receptor signalling), one possibility is that intracellular phosphate depletion in parts of the brain (and in the red blood cells, causing low-level hypoxia that might gradually have more severe consequences) could contribute to some cases of SIDS. Although there was one small study showing no apparent depletion of 25-hydroxyvitamin D levels in the context of SIDS, there could very easily be different degrees of intracellular phosphate depletion among infants with the same 25(OH)D levels. And looking at the serum phosphate levels wouldn't necessarily show anything, given the lack of correlation between intracellular and serum Pi levels. Someone would have to use MRS scans or look at the intracellular 2,3-DPG or Pi levels in red blood cells in infants, instead of just looking at the serum Pi. It's interesting that Heames and Cope (2006) (cited above) found that they could reduce the rate of infusion of noradrenaline in a manner that was proportional to the increase in serum phosphate, in a person who had developed transient heart failure from postsurgical phosphate depletion. The phosphate depletion basically caused hypotension, and the interactions with noradrenaline are really interesting (the usual thing people discuss is the fact that adrenergic drugs decrease serum phosphate by promoting phosphate uptake into cells).
Given the changes in the autonomic regulation of blood pressure that occur in response to changes in the orientation of the body, such as in a baby sleeping prone vs. supine (http://scholar.google.com/scholar?q=autonomic+orthostatic+prone+supine&hl=en), it's possible that there's a kind of feed-forward depletion of intracellular phosphate in parts of the brain that can lead to apnea and then increased ventilation to compensate (and then phosphate depletion because of that and because of the noradrenaline released in response to that, as in the stress response to hypoxia, and to the potential vitamin D-depletion-induced renal phosphate wasting, etc.).

Arguably, the most well-established cause of hypophosphatemia is alkalosis induced by hyperventilation (http://scholar.google.com/scholar?q=hyperventilation+alkalosis+hypophosphatemia&hl=en), and apnea commonly occurs in response to post-hyperventilation alkalosis (http://scholar.google.com/scholar?q=hyperventilation+apnea&hl=en). So the alkalosis, in response to hyperventilation (as in response to autonomic dysfunction during sleep, resulting from changes in the sleep position and from phosphate depletion in neurons or smooth muscle cells or muscle cells in the diaphragm), could drive phosphate into cells outside the brain, thereby reducing phosphate availability to the brain, and then that could gradually set the stage for more severe episodes of hypoxia, more autonomic dysfunction due to the phosphate depletion in the brain, etc. There's evidence of repeated episodes of hypoxia in some research on SIDS [see Takashima et al. (1978) and Rognum et al. (1991): (http://scholar.google.com/scholar?hl=en&q=%22sudden+infant+death%22+hypoxia)]. A decrease in the responsiveness of smooth muscle cells (or other cell types, as in neurons in the brainstem, in the context of phosphate depletion) to noradrenaline occurs in people who have orthostatic hypotension and other derangements of baroreceptor functioning, and L-threo-3,4-dihydroxyphenylserine (DOPS) has been researched as a treatment for orthostatic hypotension and orthostatic tachycardia (DOPS is a precursor of noradrenaline) (http://scholar.google.com/scholar?hl=en&q=orthostatic+DOPS). Hypophosphatemia has been associated with instability in blood pressure, in association with postural hypotension and other problems with the sensitivity and functioning of the baroreceptor reflexes (http://scholar.google.com/scholar?hl=en&q=orthostatic+hypophosphatemia). Anyway, I just put those types of crude thoughts up on this blog.
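The hyperventilation-alkalosis link itself is just the Henderson-Hasselbalch equation for the bicarbonate buffer; here's a rough illustration (my own sketch, with textbook constants, ignoring renal compensation):

```python
import math

def blood_ph(hco3_mM, pco2_mmHg, pka=6.1, sol=0.03):
    """Henderson-Hasselbalch for the bicarbonate buffer system:
    pH = 6.1 + log10([HCO3-] / (0.03 * pCO2)), with [HCO3-] in mM,
    pCO2 in mmHg, and 0.03 mM/mmHg as the CO2 solubility coefficient."""
    return pka + math.log10(hco3_mM / (sol * pco2_mmHg))

# Normal: pCO2 ~40 mmHg, HCO3- ~24 mM -> pH ~7.40.
print(round(blood_ph(24, 40), 2))
# Acute hyperventilation blows off CO2 (say pCO2 falls to 25 mmHg before
# any renal compensation), so the pH rises: respiratory alkalosis.
print(round(blood_ph(24, 25), 2))
```

The rise in pH is the proximate trigger for the intracellular shift of phosphate (and hence the hypophosphatemia) discussed above.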

Wednesday, August 26, 2009

Supplemental Phosphate (and Calcitriol) in Hereditary Forms of Hypophosphatemia: Potential Relevance to Phosphate Dosages & Responses in Normal Humans

This article [Reusz et al., 1990: (http://fetalneonatal.com/cgi/reprint/65/10/1125.pdf)(http://www.ncbi.nlm.nih.gov/pubmed/2248503)], along with other articles that describe the adverse effects (or absence of adverse effects) of different dosages of supplemental phosphate in people who have X-linked hypophosphatemic rickets (XLHR) or autosomal dominant hypophosphatemic rickets, is likely to be relevant to an understanding of the risks (or lack thereof) of phosphate supplementation in humans who don't have genetic disorders. Sitara et al. (2004) [Sitara et al., 2004: (http://www.geocities.com/razzaquems/MatrixBiology.pdf)(http://www.ncbi.nlm.nih.gov/pubmed/15579309)] noted that the mechanisms underlying the hypophosphatemia in those two sets of genetic disorders are not perfectly understood, but the actions or serum levels of FGF-23 (fibroblast growth factor-23) are augmented in both. The PHEX gene product is an endopeptidase, a protease enzyme, and mutations in that gene evidently are the root cause of XLHR and, among other phenotypic changes, serve to augment the actions of FGF-23 (http://scholar.google.com/scholar?q=phex+hypophosphatemia&hl=en). One could make the argument that the hyperphosphaturia in people who have those genetic disorders would put them at lesser risk of developing ectopic calcification in response to any given dosage of supplemental phosphate, but I don't think that's true. Researchers have reported many cases of nephrocalcinosis, which is calcification of parts of the kidneys and would be the main risk of (particularly excessive) phosphate supplementation (in my opinion), in people with XLHR who have taken the combination of phosphate and hormonal vitamin D (HVD), meaning calcitriol (1alpha,25-dihydroxyvitamin D3), which has been the standard therapeutic approach to treating the hypophosphatemia in those disorders.
FGF-23, a protein that is "hyperfunctional" in these genetic forms of hypophosphatemia, decreases renal HVD formation and decreases phosphate reabsorption by proximal tubule epithelial cells. With regard to HVD formation, one could make the argument that the decreases in serum HVD, in many people who have these genetic disorders, would make the supplemental HVD less toxic than it would be in normal people, thereby confounding an attempt to sort through the risks of HVD vs. supplemental phosphate and to get a sense of the risks of different dosages of phosphate in normal people. But I don't think that's likely to be a valid reason for ignoring the data in some of these articles, either, because HVD seems to have been causing the same hypercalciuria and hypercalcemia in people with genetic hypophosphatemia as it tends to in normal humans.

The dosages of phosphate that have been associated with nephrocalcinosis in humans, as described by Reusz et al. (1990), are really high (a mean of 136.4 mg/kg bw/day, or 9548 mg/day for a 70-kg human), and the "lower" range of dosages of phosphate (50-100 mg/kg bw/day, which is about 3500-7000 mg/day, with a mean of 69.9 mg/kg bw/day, or 4893 mg/day) was not associated with nephrocalcinosis but was still quite high. Those dosages (more than 4000-5000 mg of phosphate/day, from supplemental and food-derived phosphate combined) are similar to the dosages that, for example, Heaney (2004) [Heaney, 2004: (http://www.mayoclinicproceedings.com/content/79/1/91.full.pdf)(http://www.ncbi.nlm.nih.gov/pubmed/14708952)] was saying would potentially cause problems in humans. But almost no one ingests anywhere near those amounts of phosphate (which were, as discussed, not associated with nephrocalcinosis) per day, and a maximum of only 2300 mg/day of supplemental phosphate was required to treat people (who did not have genetic disorders) who displayed idiopathic (cause-unknown) phosphate depletion ("phosphate diabetes"). My point is that the choice is not between the use of massive amounts of phosphate and the appalling consequences of the phosphate depletion that could occur, even in the 21st century, in people who ingest phosphate mainly in the form of phytates, in whole grains and other plant-derived foods, which may provide little utilizable phosphate.
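The per-kilogram-to-daily conversions above are just multiplication by an assumed 70-kg body weight, for anyone who wants to check:

```python
def daily_dose_mg(mg_per_kg_per_day, body_weight_kg=70):
    """Convert a per-kg daily dosage to a total daily dosage for a
    given body weight (70 kg assumed, as in the text)."""
    return mg_per_kg_per_day * body_weight_kg

# Dosages discussed by Reusz et al. (1990), assuming a 70-kg human:
print(round(daily_dose_mg(136.4)))  # ~9548 mg/day (mean dose linked to nephrocalcinosis)
print(round(daily_dose_mg(69.9)))   # ~4893 mg/day (mean of the "lower" range)
print(daily_dose_mg(50), daily_dose_mg(100))  # the 3500-7000 mg/day range
```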
There's a middle ground between the use of high doses of phosphate (and the state of blind terror, at the prospect of phosphate-induced nephrocalcinosis, that could go along with that) and the sense of "comfort in the majority viewpoint" that seems to go along with phosphate deprivation and with the hypoxic brain injuries, osteomalacia, and arthropathy (potentially neuropathic, degenerative arthropathy/osteopathy) (http://scholar.google.com/scholar?hl=en&q=hypophosphatemia+osteopathy+OR+arthropathy) that can result from intracellular phosphate depletion. It seems to me that the lower back pain and lumbar vertebral collapse/degeneration that characterize phosphate depletion are somewhat reminiscent of the neuropathic arthropathy seen in Charcot foot disease, for example, meaning that those symptoms and manifestations could be partially neuropathic in origin.

Also, Goodyer et al. (1987) [Goodyer et al., 1987: (http://www.ncbi.nlm.nih.gov/pubmed/2822887)] discussed the dosage range of HVD (40 ng/kg/day, or 2800 ng/day, for a 70-kg human) that had been associated with the development of nephrocalcinosis in people with XLHR or autosomal dominant hypophosphatemic rickets (ADHR), and researchers have generally used very high dosages of vitamin D2, vitamin D3, and/or HVD in people who have had those disorders. Goodyer et al. (1987) supposedly found adverse effects associated with an intake of 4000 IU/day of vitamin D2 in people with XLHR or ADHR, but one wonders whether, given all of the problems, reported in old articles, with vitamin D supplements containing ten times the labeled content of vitamin D, the dosage was actually higher. That dosage range (4000 IU/day) of vitamin D has not been reported to cause hypercalcemia in studies in normal humans. Another possibility is that the people were taking vitamin D and HVD and that the hypercalcemia was attributed to the vitamin D (as opposed to the HVD, which is the more likely culprit, in my opinion). I say that because I've never seen any case report in which a person with XLHR or ADHR was given, as a standalone treatment, only a low dosage of 4000 IU/day of vitamin D3 or vitamin D2. In most cases, the dosages have been massive, and hypercalciuria seems more likely than phosphate supplementation per se to be the cause of the nephrocalcinosis, in most of these people. Gross et al. (1998) [Gross et al., 1998: (http://www.ncbi.nlm.nih.gov/pubmed/9598513)] found that HVD, at dosages ranging from 1500 to 2500 ng/day (up to 2.5 ug/day), caused hypercalciuria in everyone, in normal humans who had prostate cancer. Reusz et al.
(1990) argued, despite the past research that they cited associating hypercalciuria with nephrocalcinosis, that hypercalciuria had been associated more with the development of kidney stones than with the development of nephrocalcinosis; but, in most trials in people who have not had XLHR or ADHR, the participants have not taken both HVD and phosphate supplements in massive dosages. The dosages of vitamin D (198-1370 IU/kg/day, or 13860-95900 IU/day) and HVD (5-35 ng/kg/day, or 350-2450 ng/day) are large, and, perhaps not surprisingly, the people who displayed nephrocalcinosis were the ones who had experienced multiple episodes of hypercalciuria or hypercalcemia. Nephrocalcinosis usually requires pathologically-increased concentrations of both calcium and phosphate in order to occur. Additionally, Seikaly et al. (1996) [Seikaly et al., 1996: (http://www.ncbi.nlm.nih.gov/pubmed/8545232)] found that nephrocalcinosis was more common in people who were taking HVD and phosphate and who had renal tubular acidosis. Metabolic acidosis, in the proximal tubule epithelial cells that reabsorb most of the phosphate from the tubular fluid, can cause urinary phosphate loss, but intracellular phosphate depletion can also be an important cause of metabolic acidosis. Thus, metabolic acidosis can be both a cause and a consequence of intracellular phosphate depletion, and it's important to remember these types of complexities. The insulin resistance and mitochondrial toxicity that can result from chronic phosphate depletion have the potential to actually increase the risk of calcification, because inorganic phosphate is constantly going to be "dumped" from its "storage" in intracellular phosphocreatine and adenosine nucleotide pools, etc. That's been suggested to be one mechanism for tissue-specific calcification in any number of disorders, in people who do not have XLHR.
When there are constant metabolic crises, such as can occur in response to intracellular phosphate depletion, there will tend to be frequent, intermittent "episodes" in which phosphate is lost from cells, or from its intracellular binding to organic compounds (such as creatine), and becomes pathologically elevated in the extracellular or intracellular (cytosolic or mitochondrial) fluid. More specifically, for example, rhabdomyolysis is a fairly common result of intracellular phosphate depletion, and rhabdomyolysis can cause both wild elevations in serum phosphate (and, hence, in phosphate concentrations in the kidneys, potentially contributing to nephrocalcinosis) and elevations in myoglobin and other proteins released from necrotic muscle cells. Chronic rhabdomyolysis, such as in response to exercise in a person who is phosphate-depleted, has the potential to cause more kidney damage than phosphate supplementation ever would, and there are reports of people dying from rhabdomyolysis that was associated with (and likely to have been a consequence of, in my opinion) intracellular phosphate depletion in muscle cells and other cells (http://scholar.google.com/scholar?hl=en&q=hypophosphatemia+rhabdomyolysis+renal+failure).

Furthermore, "high" intakes of phosphate are well known to actually decrease urinary calcium excretion [Hegsted et al., 1981: (http://jn.nutrition.org/cgi/reprint/111/3/553.pdf)(http://www.ncbi.nlm.nih.gov/pubmed/7205408); LaFlamme and Jowsey, 1972: (http://www.pubmedcentral.nih.gov/picrender.fcgi?artid=292432&blobtype=pdf)(http://www.ncbi.nlm.nih.gov/pubmed/5080411)], and the high urinary calcium excretion that accompanies phosphate depletion is thought to be due, in large part, to the ongoing breakdown of hydroxyapatite in the bone tissue [Laroche et al., 1993: (http://www.ncbi.nlm.nih.gov/pubmed/8358977)]. Incidentally, I scaled the dosages of phosphate that were given to dogs, and that were associated with calcification, in the study by LaFlamme and Jowsey (1972), and the equivalent human dosages would be massive (I did the calculations a while ago, and it works out to 8000-some mg/day or more of phosphate). I honestly don't understand why those types of dosage considerations receive so little attention in research in animals. The disregard for physiological norms, in the dosages of so many nutrients or compounds that are given to animals, is significant, in my opinion, and is an ongoing issue in animal research.
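The dog-to-human scaling can be sketched along the lines below. The actual dog dosages from LaFlamme and Jowsey (1972) aren't restated here, so the input value is a placeholder, and the surface-area Km factors are the standard ones from the FDA's dose-conversion guidance (it's my assumption that body-surface-area scaling is the appropriate method; a simple mg/kg-times-body-weight calculation is also common and gives larger numbers):

```python
# Hypothetical sketch of animal-to-human dose scaling. The dog doses from
# LaFlamme and Jowsey (1972) aren't restated above, so the input here is a
# placeholder, not the study's actual value.

# Km factors (kg/m^2) from the FDA guidance on human equivalent doses.
KM = {"mouse": 3, "rat": 6, "dog": 20, "human": 37}

def human_equivalent_dose(animal_dose_mg_per_kg: float, species: str) -> float:
    """Body-surface-area-normalized human equivalent dose, in mg/kg."""
    return animal_dose_mg_per_kg * KM[species] / KM["human"]

def linear_scaled_dose(animal_dose_mg_per_kg: float, human_kg: float = 70) -> float:
    """Simple linear scaling (mg/kg x body weight), which ignores surface area."""
    return animal_dose_mg_per_kg * human_kg

dog_dose = 150  # mg phosphate/kg/day -- placeholder, not from the study
print(human_equivalent_dose(dog_dose, "dog") * 70)  # mg/day, BSA-scaled
print(linear_scaled_dose(dog_dose))                 # mg/day, linear scaling
```

The gap between the two methods is part of why animal dosages translate so poorly into human terms if nobody does the arithmetic.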

Incidentally, Laroche et al. (1993) discussed the neuropsychiatric and pain-related manifestations of intracellular phosphate depletion and also found that people who had phosphate diabetes (intracellular phosphate depletion) displayed symptoms consistent with reflex sympathetic dystrophy. That's a mysterious condition that causes bizarre, extreme pain and other symptoms. I don't have time to go into all of that, but it's basically more evidence that neuropathy and neurological damage can sometimes be one manifestation of phosphate depletion, in my opinion (and it's evidence that the "back pain" or "bone pain" of phosphate depletion may be neuropathic in origin and may not have to do with bone problems per se, independent of the central nervous system).

It's also important to remember that an increase in the phosphate intake could bind magnesium in the GI tract and produce adverse effects by that mechanism. Supplemental magnesium could conceivably reduce some of the supposed risk of increases in the dietary phosphate/(Ca+Mg) intake ratio, in my opinion, although I can't make any guarantees, at all, about safety in individuals or even in general. All I can offer is my sense of things. Even though supplemental magnesium increases phosphate reabsorption in animals and can decrease parathyroid hormone release (magnesium increases PTH levels only up to a point, and only when a person has been grossly deficient in magnesium) [Thumfart et al., 2008: (http://www.ncbi.nlm.nih.gov/pubmed/18701629)], magnesium has been shown to reduce the incidence or extent of calcification in animals given massive amounts of phosphate. I'll collect some of those articles, but the point is that the use of supplemental magnesium is worthwhile, in my opinion, and is likely to be especially worthwhile in the context of an increase in the phosphate intake, from food or another source, in relation to the intakes of magnesium and calcium, etc. That said, one would want to monitor one's electrolytes and discuss these issues with one's doctor. Magnesium can elevate serum potassium and cause natriuresis (an increase in urinary sodium excretion) at high dosages, even though the high-magnesium diet in that article (Thumfart et al., 2008) decreased urinary sodium loss in animals.

Tuesday, August 25, 2009

Interactions of Adenosine Nucleotide and Creatine Metabolism With Acid-Base Homeostasis and Phosphofructokinase

This article [Mader, 2003: (http://www.ncbi.nlm.nih.gov/pubmed/12527960)] is actually really good, and I was looking through it in more depth. I used to doubt that quantitative modeling would be useful in physiology, but it does have its value. Quantitative information is useful for getting a sense of the magnitudes of different physiological changes, as long as one doesn't expect a living organism to function in ways that are perfectly consistent with the quantitative model. Mader (2003) discussed the fact that the overall rate of glycolytic activity, encompassing the activities of all the glycolytic enzymes in the cytosol, is normally determined primarily by changes in phosphofructokinase (PFK) activity. PFK activity is increased by elevations in the free, cytosolic AMP, ADP, inorganic phosphate (Pi), and citrate, and AMP augments the ADP-mediated activation of PFK (Mader, 2003). A high intracellular, cytosolic pH (pHc) activates PFK and glycolytic activity overall, and an increase in glycolytic activity will then tend to decrease pHc. PFK activity apparently reaches 90-100 percent of its maximal rate at pHc 6.9-7.2 and above (Mader, 2003), but intracellular acidosis [especially a decrease in pHc to 6.2-6.4 or so (Mader, 2003)] decreases the rate of glycolysis to almost nothing. Mader (2003) noted that protons [acidity, meaning a high H(+), or H3O(+), concentration] inhibit PFK noncompetitively. On p. 11, Mader (2003) discusses evidence that decreases in pHc cause the creatine kinase (CK) equilibrium (I'm assuming the author is discussing the equilibrium of the cytosolic CK reaction) to favor ATP formation (i.e. cause a decrease in the PCr/ATP ratio), and this tends to be accompanied by decreases in the (cytosolic) AMP and ADP concentrations. Presumably, those decreases in the free, cytosolic AMP and ADP concentrations would tend to decrease the overall rate of glycolysis by decreasing PFK activity.
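As a rough illustration of the pHc dependence described above, here's a toy Hill-type proton-inhibition curve. The midpoint pH and Hill coefficient are numbers I chose by hand to reproduce the qualitative shape (near-maximal activity at pHc 6.9-7.2 and above, near-zero activity at pHc 6.2-6.4); they are not parameters taken from Mader (2003):

```python
# Toy sketch (not Mader's model): a Hill-type proton-inhibition curve for PFK.
# The midpoint pH (6.6) and Hill coefficient (4) are hand-picked assumptions
# so that the curve is near-maximal above pHc ~6.9 and near zero at ~6.2-6.4,
# matching the qualitative description in the text.
def pfk_relative_activity(phc: float, ph_half: float = 6.6, n: float = 4.0) -> float:
    h = 10 ** (-phc)       # cytosolic proton concentration, M
    k = 10 ** (-ph_half)   # [H+] at half-maximal inhibition (assumed)
    return 1.0 / (1.0 + (h / k) ** n)

for phc in (7.2, 6.9, 6.6, 6.3):
    print(phc, round(pfk_relative_activity(phc), 3))
```

The steepness is the point: a simple one-site noncompetitive term can't drop from ~95 percent to ~5 percent of maximal activity over less than one pH unit, so some cooperativity has to be assumed to reproduce the behavior the text describes.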
Korzeniewski (2006) [Korzeniewski, 2006: (http://www.jbc.org/cgi/content/full/281/6/3057)(http://www.ncbi.nlm.nih.gov/pubmed/16314416); (http://hardcorephysiologyfun.blogspot.com/2009/08/interactions-of-acid-base-homeostasis.html)] found evidence that AMP deaminase activity helps to prevent metabolic acidosis, especially under conditions of hypoxia or during metabolic insults, by decreasing cytosolic ADP and AMP concentrations and thereby reducing glycolytic activity (which tends to decrease the pHc). Those are just some of the mechanisms that buffer the pHc. Ponticos et al. (1998) [Ponticos et al., 1998: (http://www.pubmedcentral.nih.gov/picrender.fcgi?artid=1170516&blobtype=pdf)(http://www.ncbi.nlm.nih.gov/pubmed/9501090); (http://hardcorephysiologyfun.blogspot.com/2009/05/allosteric-inhibition-of-ampk-by.html)] found that the allosteric inhibition of AMP-activated protein kinase (AMPK) by phosphocreatine (PCr) becomes less pronounced at low pH values, and it's noteworthy that AMPK also activates type 2 PFK (PFK-2) by phosphorylating PFK-2 [Pelletier et al., 2005: (http://endo.endojournals.org/cgi/content/full/146/5/2285)(http://www.ncbi.nlm.nih.gov/pubmed/15677757?dopt=Abstract)]. That increases fructose 2,6-bisphosphate (FBP) formation by PFK-2, and FBP acts to sustain PFK activity and glycolytic activity, overall. But low pHc values might also tend to decrease AMP and ADP levels (Mader, 2003) and decrease the AMP/ATP ratio, as discussed above, and that decrease in the AMP/ATP ratio would tend to decrease AMPK activation and oppose the disinhibition of AMPK activity that would tend to occur as pHc decreases (Ponticos et al., 1998), in the face of a constant PCr concentration. A decrease in pHc would also, according to Mader (2003), tend to decrease the PCr/ATP ratio. So it sounds like acidosis could decrease the PCr-mediated inhibition of AMPK but, conceivably, also produce an opposing effect and decrease the AMP-mediated activation of AMPK.
But Mader (2003) isn't really talking about an increase in ATP levels being a result of acidosis. The idea is that the equilibrium might tend to shift and to favor ATP formation, by what is basically a mass action effect of an increase in [H(+)]. It sounds to me like acidosis would tend to augment AMPK activity, and pharmacological AMPK activation sounds like it could just exert feed-forward activation of glycolysis and serve to maintain a low pHc (acidosis or borderline acidosis). Part of the idea is that AMPK activation then leads to an increase in glucose uptake and that that extra glucose will then be oxidized and buffer the pHc (increase the pHc), but there are problems with viewing things that way. It doesn't make a lot of sense, because strong, physiological (nonpharmacological) AMPK activation tends to be a result of a metabolic insult or of chronic ATP depletion, and the mitochondrial biogenesis/proliferation that tends to result from strong AMPK activation is often maladaptive and pathological (it frequently increases the formation of reactive oxygen species and leads to the formation of mitochondria that don't work properly or that exacerbate the overall degree of heteroplasmy, across all the mitochondria in the cell). Pelletier et al. (2005) noted that AMPK may not be a major factor in maintaining glycolytic activity, however, in light of experiments in mutant mice that display hypofunctional AMPK activity. AMPK also forms heterodimers (or heterooligomers, I suppose, too) with CK (Ponticos et al., 1998) and inhibits CK activity by phosphorylating it. In any case, it sounds like increases in the AMP and ADP concentrations will tend to activate glycolytic activity by various mechanisms.

Mader (2003) also discussed more research showing that the intracellular, free Pi levels are usually 2-4 mmol/kg ww muscle tissue (~ 3.3-6.6 mM), in the heart and skeletal muscles of rodents, and tend to be maintained at those low levels (as discussed in my posting yesterday). I know that hypoxia and metabolic stress are thought to increase free Pi levels in parallel with increases in AMP levels, however. That's fairly well-known, and it's one hazard in relying too much on the type of modified CK equilibrium expression that Mader (2003) offered:

[ATP] + [Pi] <---> [ADP] + [PCr] + [H(+)]

Under hypoxic or ischemic conditions, one can't really predict much by looking at mass-action effects on that multi-reaction equilibrium. And, as is apparent, Mader (2003) noted that the inclusion of [H(+)] in the CK equilibrium expression doesn't mean that H(+) directly participates in the CK reaction. The expression that Mader (2003) included (above), for the sake of deriving other relationships, is a shorthand way of showing the impact, via 12 different intervening reactions, of changes in [H(+)] on CK activity (Mader, 2003). Similarly, Katz et al. (1988) [Katz et al., 1988: (http://www.ncbi.nlm.nih.gov/pubmed/3394819)] used this expression:

PCr ---> Cr + Pi + (s)H(+), where (s) = 0.63 - [(pHcytosolic - 6.0)(0.43)]

Those two expressions don't make much sense to me, though, because they're superficially contradictory. In any case, it's not necessary to get bogged down in the details of deriving simple equilibrium expressions that may not predict in vivo changes. The "equilibrium" that Katz et al. (1988) used (above) is not really an equilibrium, though, because it's showing multiple enzymatic reactions, and the same is true of the expression that Mader (2003) uses. Also, Mader (2003) is referring to free, cytosolic [Pi], and it's not clear to me whether Katz et al. (1988) are referring to cytosolic or free [Pi]. Lyoo et al. (2003) referred to essentially the same expression that Mader (2003) used [except that Mader (2003) replaced [Cr] with [Pi] (that's like saying they're on the same side of the equilibrium expression but that removing [Cr] from the expression simplifies it), because of various assumptions]:

[Cr] + [ATP] <---> [PCr] + [ADP] + [H(+)] (Lyoo et al., 2003)

Or maybe [H(+)] should be viewed as being a kind of "floating" element, in the equilibrium, that influences the PCr/ATP ratio and other ratios in different ways under different circumstances. Here's my composite expression (this may or may not have validity, but the contradictions in the various expressions, as presented above, are not pleasing for me to see):

[H(+)] <---> ... <---> [Cr] + [Pi] + [ATP] <---> [PCr] + [ADP] <---> ... <---> [H(+)]

In any case, some of those articles are interesting.

Monday, August 24, 2009

General Pharmacological Considerations

This article [Horter and Dressman, 2001: (http://www.ncbi.nlm.nih.gov/pubmed/11259834)] is really interesting, and the authors noted, on the last two pages of text, that the gastric luminal fluid volume can be only 20-30 mL in the fasted state (meaning that the person hasn't ingested anything for 12+ hours, though 14.5 hrs may be required for the stomach to completely empty) and that the USP procedures for evaluating tablet dissolution had been based, at the time the authors wrote the article in 2001, on nonphysiological pH values (7.5 is not a pH value that's likely to be reached in the GI tract in many people) and surfactant concentrations. The authors also discussed the fact that the rate of dissolution is frequently the most important factor determining the rate of absorption (and, hence, the bioavailability, in many cases). The luminal fluid volume can be important in determining the rate of dissolution, and the authors noted, for example, that increases in the viscosity of the intestinal luminal fluid, such as in response to food intake, can slow the rate of dissolution and, hence, the rate of absorption. In general, if one wants to maximize the bioavailability of a physiological substance, one should take it on an empty stomach. One might want to spread the total daily dosage out across the day, but it's worthwhile to keep these types of things in mind.

Maximizing bioavailability is not likely to be very important for many supplements, especially if they're in capsule form, etc. For example, I don't think there's any need to try to maximize the bioavailability of encapsulated creatine monohydrate, given that a slight increase in bioavailability is not going to be very important. But, in the case of purine (and pyrimidine) nucleotides, for example, the half-life is extremely short, and the elevation of the concentration of the nucleotide or its metabolites (i.e. other purines) in the systemic circulation is extremely brief, following oral administration. Small changes in the rates of dissolution and absorption can therefore produce drastic changes in the bioavailabilities of nucleotides.

In a related vein, there are still many reports, from articles in the literature and from other sources, of problems with the dissolution of supplements provided in tablet form. It's still a significant problem in the supplement industry, in my opinion. In this context, the issue is not just bioavailability but absorption: if a tablet doesn't dissolve, the absorption and bioavailability will be zero. Consumerlab.com has shown that some tablets essentially don't dissolve at all, and they suggest a fairly involved method for telling whether a tablet is going to dissolve (http://www.consumerlab.com/results/hometest.asp). I don't think that's necessary. If a tablet is going to dissolve properly, in my opinion, it should dissolve in a small glass of water within a few minutes; tablets that truly meet dissolution standards tend to dissolve in a minute or less. Several years ago, I looked at a lot of reports from Consumerlab.com. They reported that some tablets couldn't be broken with a hammer, and I remember testing some tablets (by putting them in a glass of water) and finding that some of them required 2-3 hours to dissolve. That's obviously not acceptable. I think Consumerlab.com still has some free reports, but I'm not sure; I haven't looked at the site for a few years. A lot of tablets dissolve perfectly, but it's worthwhile, in my opinion, to just put a tablet or softgel in a glass of tap water before planning on taking that product. That's a way to test whether one "sample" tablet of a particular product, from a particular manufacturer, meets a rudimentary dissolution test; if it does, there's no need to think about it again. In general, though, the dissolution of capsules tends to be more reliable, in my opinion, than the dissolution of those other dosage forms.

Interactions of Acid-Base Homeostasis with the Metabolism of Adenosine Nucleotides, Inorganic Phosphate, and Phosphocreatine

These articles [Katz et al., 1988: (http://www.ncbi.nlm.nih.gov/pubmed/3394819); Korzeniewski, 2006: (http://www.jbc.org/cgi/content/full/281/6/3057)(http://www.ncbi.nlm.nih.gov/pubmed/16314416)] discuss some interesting aspects of acid-base homeostasis, especially in relation to the metabolism of adenosine nucleotides. Katz et al. (1988) derived this equation to show the relative stability of the intracellular pH in the face of changes in the arterial pH:

arterial blood pH (pHart) = 0.23 pHi + 5.43

Korzeniewski (2006) discussed an interesting "equation" that may shed light on the quantitative impact of a shift in the creatine kinase equilibrium on intracellular pH (the author actually includes Pi in the expression and refers to it as the Lohmann reaction). The expression is:

PCr ---> Cr + Pi + (s)H(+),

where (s) = 0.63 - (pHcytosolic - 6.0) x 0.43

I assume that's 0.63 - [(pHcytosolic - 6.0)(0.43)]
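Taking that bracketing at face value, the proton stoichiometry is easy to evaluate at a few pHc values (a minimal sketch, assuming my reading of the expression is correct):

```python
# The pH-dependent proton stoichiometry (s) of PCr hydrolysis, per the
# expression quoted above from Katz et al. (1988), with the bracketing
# interpreted as: s = 0.63 - (pH_cytosolic - 6.0) * 0.43
def proton_coefficient(ph_cytosolic: float) -> float:
    return 0.63 - (ph_cytosolic - 6.0) * 0.43

for ph in (6.0, 6.5, 7.0, 7.2):
    print(ph, round(proton_coefficient(ph), 3))
```

If the expression is extrapolated, (s) falls to zero at a pHc of about 7.47 (6.0 + 0.63/0.43), at which point PCr hydrolysis would switch from releasing protons to consuming them, but I wouldn't assume a fitted expression like this is valid that far from the pH range the authors studied.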

Katz et al. (1988) also found that the intracellular ratio of ATP to inorganic phosphate (ATP/Pi) was about 7 in the hearts of beagles but noted that earlier research had shown the ratio to be about 1.5 in dogs. Katz et al. (1988) also found that *free* intracellular Pi was about 0.8 mM (800 uM) in the beagle heart, and that's much lower than the levels of total, intracellular Pi found in either humans or rats. For example, Brautbar et al. (1983) [Brautbar et al., 1983: (http://www.ncbi.nlm.nih.gov/pubmed/6620852)] found that the total intracellular Pi levels in the skeletal muscles of rats ranged from about 7.5 mM (7500 uM) to 16 mM (16000 uM), and Ambuhl et al. (1999) [Ambuhl et al., 1999: (http://www.ncbi.nlm.nih.gov/pubmed/10561144)] found intracellular inorganic phosphate levels of 31-40 mM in the muscles of humans. Hitchins et al. (2001) [Hitchins et al., 2001: (http://ajpheart.physiology.org/cgi/content/full/281/2/H882)(http://www.ncbi.nlm.nih.gov/pubmed/11454594?dopt=Abstract)] found a value of 2.85 mM for the intracellular Pi in the skeletal muscles of rats and cited research showing values ranging from 2.7 to 4.9 mM in the skeletal muscles of rats. Just looking at the skeletal muscle data from normal rats, on a normal diet (given that the Pi levels in the range of 7.5 mM, as found by Brautbar et al. (1983), were measured in rats given a phosphate-deficient diet), it looks like the total intracellular Pi levels are somewhere between about 3.3 and 20 times the free intracellular Pi values, but it's difficult to make precise comparisons. Brautbar et al. (1983) noted that the value of 0.8 mM is close to the Km values for the binding of Pi to various enzymes, including respiratory chain enzymes (collectively, apparently). Brautbar reported very high total cellular protein contents (~ 290 mg protein/g ww muscle tissue), and the usual conversion factor assumes that there is about 100-150 mg protein/g ww tissue. 
That could substantially alter some of those conversion factors.
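The wet-weight-to-molar conversions behind these comparisons can be sketched as follows. The figure of ~0.6 L of intracellular water per kg wet muscle is my assumption (it happens to reproduce the ~3.3-6.6 mM figures quoted above from Mader (2003) for 2-4 mmol/kg ww):

```python
# Converting tissue phosphate contents (mmol/kg wet weight) into approximate
# intracellular concentrations (mM), assuming ~0.6 L of intracellular water
# per kg wet muscle tissue (my assumption; it reproduces the ~3.3-6.6 mM
# figures quoted from Mader (2003) for 2-4 mmol/kg ww).
def mmol_per_kg_ww_to_mM(pi_mmol_per_kg_ww: float, water_l_per_kg: float = 0.6) -> float:
    return pi_mmol_per_kg_ww / water_l_per_kg

print(mmol_per_kg_ww_to_mM(2.0))  # ~3.3 mM
print(mmol_per_kg_ww_to_mM(4.0))  # ~6.7 mM

# Ratio of total to free intracellular Pi, using the free value of ~0.8 mM
# from Katz et al. (1988) and total values of ~2.7-16 mM from the rat studies:
print(2.7 / 0.8, 16 / 0.8)  # roughly 3.4 to 20, close to the range above
```

A different assumed water fraction (or a protein content as high as the ~290 mg/g that Brautbar et al. (1983) reported) would shift all of these numbers, which is the point about the conversion factors.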

Korzeniewski (2006) found evidence that the deamination of AMP to inosine monophosphate (IMP), by AMP deaminase, can help to prevent metabolic acidosis during hypoxia or exercise. One reason for that, as discussed by Korzeniewski (2006), is that an abundance of ADP and, especially, AMP can directly or indirectly activate glycolytic enzymes, such as phosphofructokinase, and thereby produce intracellular acidification [Mader, 2003: (http://www.ncbi.nlm.nih.gov/pubmed/12527960)]. Korzeniewski (2006) also noted that, during hypoxia, more ADP is consumed through glycolytic activity than through respiration (i.e. the phosphorylation to ATP) but that the total intracellular ADP levels are more consistently maintained, under a variety of different cellular conditions, than the ATP levels are. Korzeniewski (2006) also cited research showing that creatine depletion can lead to reductions in AMP deaminase activity, and depletion of total intracellular creatine could conceivably impair the resistance of the brain (or muscle) to hypoxic or ischemic insults.

Incidentally, an alternate interpretation of the MRS data showing that exogenous creatine, SAM-e, and triacetyluridine can increase the phosphocreatine/nucleoside triphosphate (PCr/NTP) ratios in the brains of humans would be to say that all of those compounds have been found to increase the intracellular adenosine nucleotide levels [Ronca-Testoni et al., 1985: (http://www.ncbi.nlm.nih.gov/pubmed/4087306); (http://hardcorephysiologyfun.blogspot.com/2009/01/details-on-nucleotides-bioavailability.html); (http://hardcorephysiologyfun.blogspot.com/2009/05/uridine-induced-maintenance-of-glycogen.html)]. Researchers have also suggested that exogenous adenosine, especially (and also guanosine), exerts cardioprotective effects and increases the intracellular PCr/Cr ratio (and, probably, the PCr/ATP ratio) by maintaining the free ADP levels, particularly intramitochondrially, during ischemia [Satoh et al., 1993: (http://www.ncbi.nlm.nih.gov/pubmed/8173706); Meyer et al., 2006: (http://www.jbc.org/cgi/reprint/281/49/37361)(http://www.ncbi.nlm.nih.gov/pubmed/17028195?dopt=Abstract), discussed here: (http://hardcorephysiologyfun.blogspot.com/2009/03/creatine-cr-phosphocreatine-pcr-and.html)].
One might expect an increase in the intracellular ADP levels, even in the absence of an increase in the adenylate charge [(ATP + 0.5ADP)/(ATP+ADP+AMP)], to be accompanied by a reduction in the ATP/ADP ratio and an increase in the PCr/NTP ratio. That effect could act either in opposition to or independently of the mass-action effect that an increase in the intramitochondrial creatine levels has been suggested to have on the PCr/ATP and PCr/Cr ratios (http://hardcorephysiologyfun.blogspot.com/2009/08/interactions-in-metabolism-of-creatine.html). The suggestion has been that creatine usually increases those ratios via a mass-action effect, but I've cited research showing the opposite effect in past postings [Ceddia and Sweeney, 2004: (http://jp.physoc.org/cgi/reprint/555/2/409)(http://www.ncbi.nlm.nih.gov/pubmed/14724211?dopt=Abstract), cited and discussed here: (http://hardcorephysiologyfun.blogspot.com/2009/03/creatine-cr-phosphocreatine-pcr-and.html)]. Korzeniewski (2006) discussed the fact that the ADP pool is more consistently maintained during metabolic insults, as discussed above, and creatine is known to stimulate respiration by, in large part, helping to maintain the intramitochondrial ADP levels and the recycling of ADP back into the mitochondrial matrix. That function of the phosphocreatine "shuttle" is especially important during metabolic insults.
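The adenylate charge formula quoted above is simple enough to compute directly; the concentrations below are hypothetical placeholders, not values from any of the cited articles:

```python
# The adenylate charge formula quoted above: (ATP + 0.5*ADP)/(ATP + ADP + AMP).
# The example concentrations are illustrative placeholders, not data from the
# cited articles.
def adenylate_charge(atp: float, adp: float, amp: float) -> float:
    return (atp + 0.5 * adp) / (atp + adp + amp)

# A well-energized cell typically sits near 0.9; the charge falls as ATP is
# degraded toward ADP and AMP during a metabolic insult.
print(adenylate_charge(5.0, 0.5, 0.05))  # hypothetical mM values, charge ~0.95
print(adenylate_charge(3.0, 1.5, 1.0))   # hypothetical "stressed" values, ~0.68
```

This makes the point in the text concrete: the ATP/ADP ratio can fall (and the charge with it) even while the total adenylate pool, and especially the ADP pool, is held relatively constant.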

Sunday, August 23, 2009

Kinetics of the Nonenzymatic Hydrolysis of ATP in Aqueous Solution at Various pH Values

These are some articles that provide data on the kinetics of the nonenzymatic hydrolysis of ATP in aqueous solution [Malhotra and Sharma, 1980: (http://www.new.dli.ernet.in/rawdataupload/upload/insa/INSA_1/20005bb0_589.pdf); Couture and Ouellet, 1957: (http://article.pubs.nrc-cnrc.gc.ca/ppv/RPViewDoc?issn=1480-3291&volume=35&issue=11&startPage=1248); Friess, 1952: (http://pubs.acs.org/doi/abs/10.1021/ja01136a016); Seno et al., 1975: (http://joi.jlc.jst.go.jp/JST.Journalarchive/bcsj1926/48.3678?from=Google)]. The article by Friess (1952) provides data on the hydrolysis of tripolyphosphate, which is a three-phosphate "polymer," but it's probably valid to assume that the tripolyphosphoryl- moiety of ATP would undergo nonenzymatic hydrolysis, to ADP and inorganic phosphate (Pi), at a similar rate under the same conditions, and other authors have cited the Friess (1952) article with that assumption in mind. Those articles show that, after 1 hour in solution, at various temperatures, between 91.2 and 99.9 percent of the ATP is still intact (at most, only 8.8 percent of the initial amount/concentration would degrade within 1 hour, based on the data from those articles). This means that storing ATP in solution for days wouldn't be a good idea, but it would be possible to wait for a non-enteric-coated preparation to dissolve in water and then drink the water, to maximize the bioavailability. This approach can significantly increase the bioavailability of physiological substrates, as shown, in the case of creatine monohydrate, by Deldicque et al. (2008) [Deldicque et al., 2008: (http://www.ncbi.nlm.nih.gov/pubmed/17851680)], discussed here: (http://hardcorephysiologyfun.blogspot.com/2009/03/adenosine-and-guanosine-in-animal.html); (http://hardcorephysiologyfun.blogspot.com/2009/04/increase-in-nucleotide-absorption-and.html).
The rate of entry into solution, in the intestinal luminal fluid, can be a major factor that determines the rate of absorption and, hence, the bioavailability of a high-solubility compound, such as ATP disodium. If a tablet takes 15 minutes to completely dissolve, that has the potential to drastically reduce the bioavailability of ATP disodium. It's partly because of the extremely short half-life of plasma adenosine. If "pre-dissolution" enhanced the bioavailability of creatine, with its much longer half-life, one would expect the rate of dissolution to be even more important for ATP disodium. There could also be greater bioavailability because of a "solvent drag" effect, even though that's mainly been discussed in the context of the enhancement in the rates or extents of absorption of cations, such as magnesium or aluminum, because of the "hydration shells" around those cations. Some ATP is going to be absorbed by passive diffusion, though, and, given the relevance of solvent drag to the bioavailabilities of compounds that are known to be absorbed by passive, paracellular diffusion, it's possible that the phenomenon is relevant to the rate of absorption of ATP [see Kristl and Tukker, 1998: (http://scholar.google.com/scholar?hl=en&q=%22solvent+drag%22+bioavailability)].
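A toy one-compartment (Bateman-type) absorption/elimination model can illustrate the point. The rate constants below are hypothetical and aren't taken from any of the cited articles; the sketch just shows that, when the elimination half-life is very short, slowing the absorption (dissolution) rate collapses the peak plasma level, whereas a compound with a long half-life is much less sensitive:

```python
import math

# Toy one-compartment (Bateman) model -- not from the cited articles, and the
# rate constants are hypothetical -- illustrating why a compound with a very
# short plasma half-life (like adenosine) is far more sensitive to slow
# dissolution than one with a long half-life (like creatine).
def peak_relative_level(ka_per_min: float, ke_per_min: float) -> float:
    """Peak plasma level (arbitrary units, dose/volume = 1) for first-order
    absorption (ka) and first-order elimination (ke)."""
    tmax = math.log(ka_per_min / ke_per_min) / (ka_per_min - ke_per_min)
    return (ka_per_min / (ka_per_min - ke_per_min)) * (
        math.exp(-ke_per_min * tmax) - math.exp(-ka_per_min * tmax)
    )

fast_ka, slow_ka = 1.0, 0.05   # near-instant vs ~15-min dissolution/absorption
short_ke, long_ke = 2.0, 0.01  # half-lives of ~0.35 min vs ~70 min

# Short half-life: slowing absorption collapses the peak (~10-fold drop here).
print(peak_relative_level(fast_ka, short_ke), peak_relative_level(slow_ka, short_ke))
# Long half-life: the peak is much less sensitive to ka (~1.4-fold drop).
print(peak_relative_level(fast_ka, long_ke), peak_relative_level(slow_ka, long_ke))
```

The asymmetry between those two cases is the quantitative version of the argument that pre-dissolving something like ATP disodium should matter much more than pre-dissolving creatine did.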

I put the calculations below, but it's noteworthy that some researchers have suggested that ATP should be provided in enteric-coated preparations to prevent degradation, prior to absorption, in the stomach or small intestine. Bours et al. (2007a) [Bours et al., 2007a: (http://www.pubmedcentral.nih.gov/picrender.fcgi?artid=1913056&blobtype=pdf)(http://www.ncbi.nlm.nih.gov/pubmed/17578566)] didn't suggest that but nonetheless did use enteric-coated ATP, at a dose of 4 grams/day (4000 mg/day), and suggested that the failure of the ATP to attenuate the toxic effects of indomethacin had occurred because the enteric coatings on the tablets had dissolved at intestinal sites distal to the site of the toxic effects of indomethacin (Bours et al., 2007a). The authors noted that, in a previous study, ATP administered in solution, directly into the duodenum in humans (at 30 mg/kg bw, twice within a 24-hour period, for a total of 4200 mg/"day" for a 70-kg human) [Bours et al., 2007b: (http://www.ncbi.nlm.nih.gov/pubmed/17301652)], had produced protective effects against the toxic effects of indomethacin (Bours et al., 2007a). The apparent failure of enteric coatings to dissolve reliably and allow for the dissolution of the tablets [the tablets themselves, beneath the enteric coatings, often fail to dissolve in reasonable amounts of time, too, as laboratory tests have shown, and this is likely to severely limit the bioavailability of something like ATP (and, more precisely, of the adenosine derived from it)] is not surprising and is consistent with the fact that the pH in the intestinal luminal fluid does not reliably exceed 6.5 until the distal ileum [Fallingborg et al., 1999, discussed here: (http://hardcorephysiologyfun.blogspot.com/2009/04/increase-in-nucleotide-absorption-and.html)].

Malhotra and Sharma (1980) found that the hydrolysis of ATP had followed first-order kinetics, and they found an overall first-order rate constant (I'll call it k1) of 1.52 x 10^(-5) min^(-1) (1.52E-5 min^-1), at 50 degrees C and pH 9.00, and that value for k1 is an overall rate constant that encompasses all of the individual rates of reaction for the individual species that ATP exists as, such as ATP(4-) and HATP(3-), etc. One can write an approximate, general form of the first-order rate equation for the degradation of ATP by keeping the pH constant (more specifically, the concentrations of the hydroxide anion or of protons [the hydrogen ion concentration, H(+) or H3O(+)], respectively, for the base-catalyzed or acid-catalyzed hydrolytic reactions) and, assuming one has calculated the individual rate constants, combining those individual rate constants into a single one (k1). For example, Malhotra and Sharma (1980) simplified the longer form of the rate equation to this:

rate [in mol/(L min) or M/min] = r = k5[ATP(4-)] + k5'[ATP(4-)] ln [OH(-)]

Seno et al. (1975) described the nonenzymatic hydrolysis of ATP with "pseudo-" first-order kinetics, meaning that the [H(+)] or [OH(-)] is known and has been incorporated into the pseudo-first-order rate constant that I'm calling k1. In those terms:

r = -d[ATP]/dt = k1[ATP], where k1 = something like k5 + k5' ln [OH(-)]

The integration of that gives this (technically, I guess two constants could show up during integration and might have to be combined and incorporated into a final, final rate constant, but I don't need to be precise and am ignoring that):

ln [ATP]t = (-k1)t + ln [ATP]0 (where [ATP]t = the ATP concentration at time t, in M (mol/L), [ATP]0 = the initial ATP concentration, in M (mol/L), t = time elapsed, in min, and k1 = the pseudo-first-order rate constant, in min^-1)

So [ATP] = e^((-k1)t + ln [ATP]0), and, at t = 60 minutes, ([ATP]/[ATP]0) x 100 = the percentage of the initial ATP concentration remaining intact after one hour

Couture and Ouellet (1957) collected data showing first-order kinetics, but the way they reported some of the data led me to want to do the calculations using both first-order and zero-order equations. The zero-order rate equation is this:

rate [in mol/(L min) or M/min] = k0 (in this case, the units of the rate constant are M/min, not min^-1, and, given that the rate is constant, are the same as the units of the rate) = -d[ATP]/dt

The integration of that gives this zero-order equation:

[ATP]t = (-k0)t + [ATP]0

So here are the percentages of ATP remaining intact, in solution, after 1 hour:

From Malhotra and Sharma (1980) (I'm using the k value of 1.70E-5 min^-1 (which is similar to the pseudo-first-order rate constant), on p. 591, instead of the overall rate constant the authors listed on p. 593, because the authors listed an [ATP]0 value on p. 591):

[ATP]t=60 = e^((-k1)t + ln [ATP]0) = e^((-1.70E-5)(60) + ln (9.218E-3)) = 0.009209 M ATP

([ATP]t=60/[ATP]0) x 100 = ((9.209E-3)/(9.218E-3)) x 100 = 99.902 percent remaining after 1 hour

I'm going to use the kapp value, in Fig. 4 on p. 3679 of Seno et al. (1975), as k1, in this case, and assume it's similar enough to the k0app pseudo first-order rate constant the authors refer to in their equations [the authors mentioned that the kapp and k0app constants were similar under other conditions, and this is essentially the same assumption I mentioned above (that I can ignore any constants that would show up during integration, for the sake of this crude analysis)]. Note that the authors express the constant in these terms:

kapp x 10^7 (in sec^-1) = a number on the graph = 10 at 50 degrees C at pH 4

So kapp = k1 = 10/(1E7) = 1E-6 sec^-1 = (60 sec/1 min)(1E-6/sec) = 6E-5 min^-1

[ATP]t=60 = e^((-k1)t + ln [ATP]0) = e^((-6E-5)(60) + ln (2E-3)) = 1.99281E-3 M

([ATP]t=60/[ATP]0) x 100 = ((1.99281E-3)/(2E-3)) x 100 = 99.641 percent remaining after 1 hour
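The sec-to-min conversion is the easy thing to slip on here, so here it is spelled out (same integrated first-order law as above, with the values as read off the Seno et al. figure):

```python
import math

# kapp as read off Fig. 4 of Seno et al. (1975): kapp x 1e7 = 10 (in sec^-1),
# at 50 degrees C and pH 4
kapp_per_sec = 10 / 1e7            # 1E-6 sec^-1
k1_per_min = kapp_per_sec * 60     # 6E-5 min^-1

atp0 = 2e-3                        # M, the [ATP]0 value used above
atp_60 = atp0 * math.exp(-k1_per_min * 60)
percent_remaining = atp_60 / atp0 * 100  # about 99.64 percent remaining
```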

The article by Couture and Ouellet (1957) is in French, but I can still glean the data from it (vitesse, in French, means velocity, V, or rate of reaction). Note that the authors report the rate of reaction, not the rate constant, on the dependent axis of Fig. 3, on p. 1251. So:

rate = V = k1[ATP] = -d[ATP]/dt, and I'm going to assume that they're referring to the initial rate. In that case, as shown in Fig. 3, V x 10^9 = 30 (in M/sec) (at pH 8.82), and V = 30/(1E9) = 3E-8 M/sec = 1.8E-6 M/min. So:

Vinitial = 1.8E-6 M/min = k1[ATP]0 = k1(1.175E-3), and k1 = (1.8E-6)/(1.175E-3) = 1.532E-3 min^-1

So:

[ATP]t=60 = e^((-k1)t + ln [ATP]0) = e^((-1.532E-3)(60) + ln (1.175E-3)) = 1.0718E-3 M

([ATP]t=60/[ATP]0) x 100 = ((1.0718E-3)/(1.175E-3)) x 100 = 91.2 percent remaining after 1 hour

I did a zero-order calculation with that reaction rate (the one reported in Fig. 3), and the calculation still showed that about 90 percent of the ATP remained after 1 hour. But I think the rate constant I calculated from the reaction rate reported in Fig. 3 of Couture and Ouellet (1957) is probably more or less "correct" or valid.
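Running the Couture and Ouellet numbers both ways makes that comparison concrete (a sketch using the figures quoted above, not a re-analysis of their data):

```python
import math

# Values quoted above from Couture and Ouellet (1957), Fig. 3 (pH 8.82)
v_initial = 1.8e-6   # initial rate, in M/min (3E-8 M/sec)
atp0 = 1.175e-3      # initial ATP concentration, in M

# First-order: back the rate constant out of the initial rate, then decay
k1 = v_initial / atp0                            # about 1.532E-3 min^-1
first_order_pct = math.exp(-k1 * 60) * 100       # about 91.2 percent left

# Zero-order: assume the initial rate holds for the whole hour
zero_order_pct = (atp0 - v_initial * 60) / atp0 * 100  # about 90.8 percent left
```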

Those data suggest that nonenzymatic hydrolysis at gastric pH is slow enough that ATP disodium doesn't need to be prepared in enteric-coated tablets.

Saturday, August 22, 2009

Monobasic vs. Dibasic Sodium Phosphate in Acid-Base Homeostasis

In this article [Kirschbaum, 1998: (http://www.ncbi.nlm.nih.gov/pubmed/9487238)], Kirschbaum (1998) noted that the large dosages (15-23 grams of phosphate, given once or twice a day, in many cases) of dibasic and monobasic sodium phosphate can cause acidosis, in part, because the ratio of HPO4(2-) (dibasic) to H2PO4(-) (monobasic) in some sodium phosphate preparations can be as low as 0.19, and that contrasts with the HPO4(2-)/H2PO4(-) ratio of about 4:1 (or 4) that normally exists in the extracellular fluid in vivo at pH 7.4 (Kirschbaum, 1998). Kirschbaum (1998) argued that the administration of large amounts of sodium phosphate preparations with that type of "acidifying" preponderance of the monobasic over the dibasic phosphate species will "consume" bicarbonate, given the normally-acidic pH of the urine (Kirschbaum, 1998, gives a figure of 6.0, and the urinary pH can be as low as 4.0-4.5). So it's both the relatively high amount of the "acidifying" species [H2PO4(-)] and the high rate at which that species enters the extracellular fluid that have the potential to cause severe problems. The elevation in the anion gap means that there's more phosphate (an "unmeasured" anion, meaning one that is measured in a blood test but not used to calculate the anion gap) relative to bicarbonate (a "measured" anion that is used in the anion-gap calculation), because the acid from the H2PO4(-) has neutralized some of the bicarbonate. MacKay and Oliver (1935) [MacKay and Oliver, 1935: (http://jem.rupress.org/cgi/reprint/61/3/319.pdf)(http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2133223)] noted that supplemental NaH2PO4 or KH2PO4 (monobasic sodium or potassium phosphate) was likely to have been acidifying and that Na2HPO4 or K2HPO4 was likely to have been alkalinizing. I've seen articles, however, showing that derangements in phosphate homeostasis can be associated with much less predictable disturbances in the serum osmolarity and in acid-base homeostasis.
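The roughly 4:1 excess of the dibasic over the monobasic species at pH 7.4 falls straight out of the Henderson-Hasselbalch equation, taking the second pKa of phosphoric acid as about 6.8 (a textbook value; the exact number shifts a bit with temperature and ionic strength):

```python
def dibasic_to_monobasic_ratio(ph, pka2=6.8):
    """[HPO4(2-)]/[H2PO4(-)] = 10^(pH - pKa2), by Henderson-Hasselbalch."""
    return 10 ** (ph - pka2)

plasma_ratio = dibasic_to_monobasic_ratio(7.4)  # about 4: dibasic dominates
urine_ratio = dibasic_to_monobasic_ratio(6.0)   # about 0.16: monobasic dominates
```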

I scaled the dosages used in that article by MacKay and Oliver (1935), and the "control diet" provided the rats with a dosage of phosphate that scales to a human dosage of 4873 mg of phosphate per day. The dosages that were used to cause kidney damage in the rats were 5-10 times that dosage. I've done scaling calculations for a few articles that actually list the molecular formulas (such as HPO4, etc.) of the dietary supplements and constituents used in animals (that makes me more confident that phosphate means phosphate and not phosphorus, etc.), and the numbers I've gotten have generally agreed with the numbers that Heaney (2004) [Heaney, 2004: (http://www.mayoclinicproceedings.com/content/79/1/91.full.pdf)(http://www.ncbi.nlm.nih.gov/pubmed/14708952), discussed here: (http://hardcorephysiologyfun.blogspot.com/2009/08/more-about-phosphate-metabolism.html)] came up with in his comparisons of animal and human dosages. The amounts of phosphate in the baseline, control diets (unsupplemented with phosphate) of animals tend to be comparable to a scaled human dosage of 4000-6000 mg/day or more.
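For what it's worth, one standard way to do this kind of animal-to-human scaling is the FDA body-surface-area method with the usual Km factors; I'm not claiming this is exactly how the figure above was derived, and the rat intake used here is a made-up number purely for illustration:

```python
# Human-equivalent dose (HED) via body-surface-area scaling:
# HED (mg/kg) = animal dose (mg/kg) x (animal Km / human Km)
KM = {"mouse": 3, "rat": 6, "dog": 20, "human": 37}  # standard Km factors

def human_equivalent_dose_mg_per_day(animal_dose_mg_per_kg, species, human_kg=70):
    hed_mg_per_kg = animal_dose_mg_per_kg * KM[species] / KM["human"]
    return hed_mg_per_kg * human_kg

# Hypothetical example: a rat phosphate intake of 430 mg/kg/day
scaled = human_equivalent_dose_mg_per_day(430, "rat")  # roughly 4900 mg/day
```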

But one thing to consider is that there might just be some time required to adjust to the acid-base effects of something like Na2HPO4, which is in that supplement I mentioned, or another source of phosphate (even phosphate in milk). A lot of the articles on milk-alkali syndrome discuss acid-base changes in very vague terms, but there are lots of articles showing that the consumption of massive amounts of milk can produce alkalosis and other acid-base disturbances. I'm not sure how clear it is that the phosphate in milk contributes to those effects, but it's a possibility, in my opinion.

But the point is that there's no downside to gradually increasing the dosage of a source of exogenous phosphate, to allow time for the kidneys and other cells to adjust to the changes, etc. That's one reason I don't think the intermittent "phosphate loading" approach is a good idea. One of the articles I discussed in a recent posting shows that 2-3 days can be required for cells in the kidneys and other organs to adjust to an increase in the phosphate intake. Most people probably wouldn't notice, or require, much of an adjustment period for the re-establishment of normal serum and extracellular-fluid electrolyte levels in response to an increase in phosphate intake [i.e. for decreases in the densities of phosphate transporters on the luminal (apical) membranes of proximal tubule epithelial cells to occur, leading to an increase in phosphate excretion], but one strategy would be to dissolve the source of phosphate in water and drink fractionated amounts of it, as discussed by Giesecke, 1990 [discussed here, by me: (http://hardcorephysiologyfun.blogspot.com/2009/03/enhancing-safety-and-minimizing.html)]. If a person has kidney disease or liver disease or any disease state, however, it would be worthwhile to exercise extra caution, even after discussing these things with one's doctor. One thing that leads me to suggest the potential need to allow time for those types of acid-base homeostatic changes to occur, in the context of an increase in the dietary phosphate/calcium ratio, etc., is the striking overlap between the symptoms that can accompany hypophosphatemia (in the context of parenteral nutrition, etc.), in case reports I've been reading, and the side effects that can accompany phosphate supplementation. One way of interpreting that overlap would be to say that the cells in the kidneys and other organs have become adapted to low intracellular phosphate levels and cannot immediately respond to an acute increase in phosphate availability.
So the parenteral nutrition acutely drives phosphate into cells and can sometimes cause acute hypophosphatemia with electrolyte abnormalities [producing the "pedal edema," or edema in the feet, plasma volume expansion, or dyspnea (shortness of breath), etc.], and phosphate supplements have the potential to produce the same electrolyte abnormalities and abrupt increase in intracellular phosphate. And both hyperphosphatemia and phosphate supplementation have sometimes been associated with pedal edema or plasma volume expansion, dyspnea, etc. The overlap of the symptoms associated with phosphate depletion and phosphate "excess" is considerable, in my opinion, and has the potential to create confusion. Also, pedal edema is known to be associated with congestive heart failure, and researchers have found that a reversible form of congestive heart failure can occur in some people who have hypophosphatemia [Darsee and Nutter, 1978: (http://www.ncbi.nlm.nih.gov/pubmed/363007)]. To the extent that an exogenous source of phosphate could produce transient plasma volume expansion, the tendency could be to erroneously attribute transient derangements in electrolytes or in acid-base markers to pathological effects of phosphate. In other cases, however, some of the derangements might be correctly attributable to pathological effects of phosphate. I'm just saying that the presumed "hypersensitivity" of mechanisms aimed at phosphate retention, in the context of chronic phosphate depletion, would suggest the need for making gradual changes, if any, under a doctor's supervision, especially past a certain point or in people in disease states.

Friday, August 21, 2009

Case Reports of Myopathy in Hypophosphatemia

It's easy to look at this article [Schott and Wills, 1975: (http://www.pubmedcentral.nih.gov/picrender.fcgi?artid=491911&blobtype=pdf)(http://www.ncbi.nlm.nih.gov/pubmed/1151410)], in which Schott and Wills (1975) described a person who had had hypophosphatemia and, as a result, had become almost unable to walk because of muscle weakness, and say that this type of thing couldn't occur in the present day. But this type of thing could, arguably, happen more easily in today's medical system than in 1975 (especially if someone described the symptoms in a slightly different way). It's worth noting that the authors, Schott and Wills (1975), were not the ones who had, on two occasions, dismissed the person's symptoms as having been "hysterical" in nature. Before Schott and Wills (1975) saw the person, some other doctors had seen her and had not been able to find anything objectively wrong with her. Today, a doctor might only be able to spend ten minutes with his or her patients, because of insurance issues and everything, and refer to the complaints of back pain and muscle weakness as "somatic symptoms," etc. I don't usually do this, but here's an excerpt:

"Also at that time she became aware of muscular weakness, initially causing difficulty in climbing high steps and in manoeuvring her legs when getting into a car. The weakness of both thighs subsequently progressed, and was associated with intermittent hip and low back pain. She was investigated in another hospital one year before admission here, when proximal muscle weakness and a waddling gait were noted, together with brisk tendon reflexes and bone tenderness on palpation. An electromyogram performed at that time was normal, and the only significant abnormal investigation found was a low renal threshold for glucose. The symptoms were considered to be hysterical and she was discharged. Her weakness, however, became more profound, and she had to pull herself upstairs, a task that became increasingly difficult with the development of proximal weakness in the arms, and for six months before admission she had been unable to raise her arms above her shoulders. She was admitted at that time to a second hospital for investigation, and was again thought to be psychoneurotic and no specific therapy was prescribed. She continued to deteriorate, commenced walking with a Zimmer frame, and was admitted to this hospital for further assessment. On direct questioning, she had noted that her nails had become brittle, and that she had had a tendency to vomit occasionally over the preceding 10 years. Her weight had fallen by about 13 kg over four years, although she had always eaten an adequate and normal diet. Her sister reported that the patient had 'shrunk' over the preceding two years" (Schott and Wills, 1975, p. 298).

The muscle weakness could have been partly a result of neuropathy induced by phosphate depletion [(http://scholar.google.com/scholar?hl=en&q=phosphate+hypophosphatemia+neuropathy+OR+neuropathic); (http://scholar.google.com/scholar?hl=en&q=phosphate+muscle+weakness+hypophosphatemia+neuropathy+OR+neuropathic)], given that the authors of some of those articles, in the search results, have described neurological problems resulting from phosphate depletion. The muscle weakness and exercise intolerance that can occur in phosphate depletion [(http://scholar.google.com/scholar?hl=en&q=phosphate+hypophosphatemia+exercise+intolerance); (http://scholar.google.com/scholar?hl=en&q=phosphate+muscle+weakness+hypophosphatemia)] are reminiscent of the types of symptoms that people experience in mitochondrial disorders, as discussed in past postings. Some interesting articles came up in those searches. I've only looked at the abstracts so far, but the authors of this one described a person who had had fatigue and exercise intolerance that were suggestive of some mitochondrial or bioenergetic pathology [Land et al., 1993: (http://www.ncbi.nlm.nih.gov/pubmed/8400863)]. This is another one showing the association of phosphate depletion with poor insulin sensitivity [Haap et al., 2006: (http://www.ncbi.nlm.nih.gov/pubmed/16391583)], and Haap et al. (2006) found that serum phosphorus correlated positively with a marker of insulin sensitivity. The authors mention, in the abstract, that one can't definitely say that the higher phosphate availability causes the cells to become more responsive to insulin, but it's known that ATP depletion in cells can reduce the cells' insulin sensitivities (http://scholar.google.com/scholar?hl=en&q=intracellular+ATP+insulin+sensitivity).

As far as that case report goes, however, it's also worth noting that there are many old articles describing a higher frequency of cavities ("dental caries") in the context of phosphate depletion, and there are old articles describing the "anticariogenic" effects of an adequate phosphate intake (within the normal range of dietary intakes) (http://scholar.google.com/scholar?hl=en&q=sodium+phosphate+caries+OR+anticariogenic+OR+cariogenic). On the one hand, that's not surprising. Everyone knows that hydroxyapatite contains phosphate, etc. But, in the vast majority of the research that comes out these days, there's this overriding assumption that dietary phosphate is bad for bones and is going to cause calcium to be lost from the bones, etc. Obviously, I think one should be careful with sources of phosphate and not take large amounts of any source of phosphate, from food or otherwise, at any one time, so as to allow the kidneys to filter it and to allow the phosphate to be transported into cells. But the information on the utilizable phosphate content of many vegetable/plant-based foods is probably very inaccurate, in my opinion. Phytates may provide very, very little utilizable phosphate, as discussed in past postings, but these are just my opinions.

Thursday, August 20, 2009

Free-Wheeling Discussion of L-Methylfolate

In this article [Di Palma et al., 1994: (http://cat.inist.fr/?aModele=afficheN&cpsidt=4093099)], Di Palma et al. (1994) used 90 mg/day of methylfolate to treat depression and discussed research on the use of 50 mg/day of methylfolate [including Guaraldi et al., 1993: (http://www.ncbi.nlm.nih.gov/pubmed/8348200)]. There's more research on the use of 50 mg per day, but I think some of those reports are only abstracts. Di Palma et al. (1994) also repeatedly note that the patients who participated in those trials were "normofolatemic" (i.e. displayed normal serum folate levels). The main issue I would wonder about is the use of supplemental folic acid per se, past a certain dosage. It seems to cause some strange effects, at dosages above 10 or so mg/day, in the long term. It's not that it's really toxic [except in people who have dihydropteridine reductase deficiency, an inherited genetic disorder (http://hardcorephysiologyfun.blogspot.com/2009/05/evidence-that-reduced-folates-can-serve.html)] but that it may compete with methylfolate for entry into the brain (as suggested by the authors of one of those articles on dihydropteridine reductase deficiency) and may compete with tetrahydrobiopterin (BH4) for binding to tyrosine hydroxylase or the nitric oxide synthases. But if one is not taking supplemental folate or has a serum folate value that is within the normal range (due to some dosage of folic acid below 5 mg or something), I don't think the serum folate level is really relevant to the effects of methylfolate. The serum folate range is very small, and a serum folate level within the normal range of values is unlikely, in my opinion, to produce anything close to "saturation" of the intracellular binding sites for the intracellular total folates (the concentration of binding sites, in the liver, is something like 150-200 uM, at least, excluding the binding sites on the pterin biosynthetic enzymes).
But one would want to discuss this type of thing with one's doctor, especially if one were taking any medications. In that past posting I linked to, the authors of one of the articles go into all the research on the use of reduced folates (such as methylfolate) and oxidized folates (folic acid) in people who have dihydropteridine reductase deficiency, and the authors provided a lot of evidence that reduced folates tend to actually be safer than folic acid, in terms of their effects on the brain. That seems paradoxical, at first glance, because reduced folates are more potent, from the standpoint of their effects on cell proliferation, etc. But they exert less of a pro-convulsant effect, for example, and don't cause the neurological symptoms that folic acid does in people with dihydropteridine reductase deficiency (DHPRD) and in some forms of phenylketonuria, I think, in which reduced folates have been used instead of BH4 (this was before BH4 was approved to treat BH4 depletion in phenylketonuria). That's probably because of the lack of capacity of folic acid to serve as a BH4 analog, except to the extent that some of it can be reduced (converted into reduced folates intracellularly).

One thing I can think of that would be a downside of high dosages of methylfolate would be the potential to mask vitamin B12 deficiency, and I think there's reason to consider dosages of methylcobalamin in the 1-5 mg/day range, in combination with methylfolate, even assuming that the serum B12 is normal. The other thing is that methylfolate can probably serve as a BH4 analog in humans (it seems to, in my opinion), and, to the extent that it can, that could conceivably cause some sort of abnormal nitrergic effects at high dosages. That doesn't seem to be much of a downside, in my opinion, although a person who had an inflammatory disease or something like multiple sclerosis could experience some sort of mixture of bad and good effects from methylfolate at high dosages, as a result of the BH4 "mimesis" at higher dosages. But the supposed nitrergic effect is also likely to contribute to the dopaminergic/noradrenergic effects of methylfolate, given that, for example, L-arginine releases dopamine in a BH4-dependent manner (although it's more complicated than one might think) [I'm pretty sure it's the Liang et al. (1998) article: (http://scholar.google.com/scholar?hl=en&q=%22L-arginine%22+dopamine+tetrahydrobiopterin)]. So it's not really likely to be a bad effect, up to a certain dosage (I mean that different people might experience differing degrees of nitrergic effects from methylfolate, and those effects might be problematic in some people and not in others, etc.). I tend to think that combining methylfolate with L-arginine might cause undesirable side effects, at high dosages of either one, and I tend to think the nitrergic effects of methylfolate in the brain (the supposed nitrergic effects) are likely to be "better" than those of arginine. L-arginine just seems to produce inconsistent effects or to produce a mixture of mood-elevating and mood-worsening effects. It's not a matter of toxicity, but it just doesn't seem to produce very predictable effects. 
The research shows that inconsistency.

The main issue, in my opinion, would be to exercise some caution in combining high dosages of methylfolate with noradrenergic or dopaminergic medications, because the BH4-mimicking effect of methylfolate would be expected to augment noradrenergic/dopaminergic effects. It's not really a matter of neurotoxicity, in my opinion; it's that methylfolate could simply augment the effects of those medications and cause agitation or insomnia or nervousness, etc. That's the main effect of BH4. Its predominant effect is to enhance dopaminergic and noradrenergic transmission in the brain, and methylfolate really seems to begin serving as a BH4 analog at high dosages. For example, BH4 has been used to treat dopa-responsive dystonia [(http://scholar.google.com/scholar?hl=en&q=%22dopa+responsive%22+dystonia+tetrahydrobiopterin+supplement); (http://scholar.google.com/scholar?hl=en&q=%22dopa+responsive%22+dystonia+tetrahydrobiopterin)], and I think that's mainly or always caused by mutations affecting BH4 biosynthesis. I can't say that methylfolate definitely serves as a BH4 analog at higher dosages, and I've never taken BH4 and therefore can't make even a subjective comparison (BH4 has been used to treat depression in a number of small studies and is available by prescription in the US under the generic name of sapropterin dihydrochloride; but it's not available over-the-counter and seems unlikely to be covered by insurance for this kind of use). L-methylfolate is available by prescription or over-the-counter, as discussed in past postings.

I tend to think that the way methylfolate works is to cause something resembling saturation of the binding sites in the liver (and maybe also the brain or other extrahepatic tissues), at the "lower" range of dosages (I mean that this quasi-saturation might begin to emerge at, say, 15 or 20 mg/day or maybe less), and then begin to serve as a BH4 analog at higher dosages. In my experience, the effects of L-methylfolate are greater at the higher dosage range, but there's some dosage range at which one would experience diminishing returns from further increases in dosages. But Di Palma et al. (1994) found, incidentally (without seeking to find any such effect), that the patients' liver enzymes were decreased by the 90 mg/day dosage. If methylfolate were toxic at that dosage, one wouldn't expect to see a reduction in serum liver enzyme levels (the reductions were significant, and I don't have the article in front of me right now) and might expect to see a worsening of liver function (given that the intracellular total folate levels in the liver would be higher than those in any other tissue). But it basically improved liver function, to some extent, in those patients. It's never been proven to treat liver disease or any other disease, however, but I'm just saying that that suggests that the higher dosages are unlikely to be "toxic." I can't make any definitive statements about safety, however, on an individual basis, and one would want to discuss that type of thing with one's doctor. Di Palma et al. (1994) didn't think 90 mg/day was toxic and mentioned that, but they didn't necessarily have any basis for saying that (other than their clinical experience in psychopharmacology). But anyone with any health condition would obviously want to be extra careful and discuss the matter in more depth with one's doctor. 
For example, low dosages of reduced folates have sometimes produced anticonvulsant effects in people who have specific, probably-genetic forms of epilepsy due to cerebral folate deficiency, but higher dosages could conceivably lower the seizure threshold in the way that many antidepressants do (as far as I know, antidepressants tend to lower the seizure threshold, almost without exception). In general, in my opinion, L-methylfolate is useful (and much more useful than folic acid).