Monday, September 28, 2009
Glutamine as an Energy Substrate in the Liver
But I was going to mention that there are also lots of articles showing that glutamine can prevent or ameliorate experimental liver disease and pancreatic exocrine dysfunction in animals, etc. [see this article and the related articles, etc.: (http://www.ncbi.nlm.nih.gov/pubmed/1971031)]. The different effects probably depend on a lot of different variables, such as the dosage and availability of phosphate and uridine, the degree of damage to the mitochondria, the degree of insulin resistance, and other factors. That's part of the reason something like glutamine would probably be more likely to be useful in diffuse brain injuries, such as traumatic brain injuries induced by concussive traumas, or milder ischemia, rather than as an adjunctive approach in severe strokes that are characterized by "raging" cores of necrotic tissue with no functional mitochondria that could oxidize glutamine-derived 2-oxoglutarate, etc. In contrast, purine nucleotides would be expected to be more "durable" in terms of their usefulness in damaged cells, given that purine nucleotides can provide ribose-5-phosphate for the formation of glycolytic intermediates and activate phosphofructokinase in the cytosol and be degraded into uric acid, etc. That presupposes that adenosine receptor activation isn't so deranged as to preclude their usefulness, but a lot of the research on adenosine in strokes doesn't make a distinction between the intracellular and extracellular actions and metabolic fates of adenosine-derived nucleotides. Adenosine that isn't infused too rapidly is not going to just behave like an A1 adenosine receptor agonist, for many reasons that I can't get into. But there are some situations in which there's severe damage or derangement of adenosine receptor signalling, and rapidly-administered, intravenous adenosine can be detrimental under those extreme circumstances. But obviously one would want to discuss these things with one's doctor, even though no one's talking about using intravenous adenosine in a willy-nilly fashion.
Glutamine and Uridine in Hexosamine Biosynthesis and Glycosylation Reactions
There are lots of interesting articles showing the roles that glutamine (GLN)-mediated increases in the O-glycosylation of serine and threonine residues on proteins, with beta-N-acetylglucosamine, can play in mediating either the protective or the undesirable effects of GLN (http://scholar.google.com/scholar?q=glutamine+hexosamine+ischemia&hl=en). GLN is a substrate of glutamine:fructose-6-phosphate amidotransferase (GFAT), the enzyme that forms glucosamine-6-phosphate and glutamate from GLN and fructose-6-phosphate [Broschat et al., 2002: (http://www.jbc.org/content/277/17/14764.full)(http://www.ncbi.nlm.nih.gov/pubmed/11842094?dopt=Abstract)]. In any case, the overall point is that some of the GLN-mediated protection against damage due to ischemia has been shown to be a result of the augmentation of hexosamine formation by GLN (see that first search), but I've also seen articles showing that GLN can sometimes worsen the course of experimental fatty liver disease or the courses of other disease states, in animals, in which insulin resistance features prominently. There's a large amount of research showing that glucosamine can cause insulin resistance in animals (http://scholar.google.com/scholar?hl=en&q=glucosamine+liver+OR+insulin+OR+ATP) and other undesirable effects, but, under normal circumstances, I remember reading that only about 3 percent of the intracellular GLN in cells in the liver is metabolized into glucosamine. As I've discussed in past postings, the formation of uridine diphosphohexosamines can sometimes sequester large amounts of uridine in ways that are undesirable, in animal models of liver disease, and GLN can also serve as a substrate for de novo uridine biosynthesis. That's not generally something that one wants to accelerate in an unregulated way. But my point would be that, at reasonable dosages, the formation, from GLN, of glucosamine or of carbamoyl phosphate (as a precursor of orotate and uridine) would be subject to substantially more regulation than the formation of hexosamines and orotate from exogenous glucosamine and...orotate would be.
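As a point of reference, here's the entry reaction into hexosamine biosynthesis written out. This is just the textbook stoichiometry of the GFAT step; the uncertain part is how much flux actually goes through it in any given tissue.

\text{fructose-6-phosphate} + \text{L-glutamine} \longrightarrow \text{glucosamine-6-phosphate} + \text{L-glutamate}

The amide nitrogen of GLN ends up on the sugar, the fructose-6-phosphate is diverted out of glycolysis, and the glutamate that's formed can go on to the other fates discussed in this posting.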
Another important point is that it's necessary to take into account the potential for GLN-mediated decreases in glutamine synthetase (GS) activity to occur, with regard to the supposed ATP-sparing effects of that suppression, and to consider the effects of GLN-derived 2-oxoglutarate on mitochondrial ATP formation. One can say that the effects of GLN are mediated by glycosylation during ischemia, but how was the uridine pool preserved during ischemia? The GLN-mediated preservation of ATP could indirectly preserve the UDP-N-acetylglucosamine and overall UDP-hexosamine pools during ischemia, given that ATP depletion tends to lead to loss of pyrimidine nucleosides, either by export or degradation. That's only one example. Not surprisingly, there's actually some research showing that some of the protective effects of uridine in cultured astrocytes, or something like that, are mediated by glycosylation of various proteins, and I remember downloading a paper that shows that glycosaminoglycan formation is more sensitive to increases in uridine availability than other UDP-sugar-dependent or UDP-hexosamine-dependent glycosylation reactions are. Again, however, one has to consider the increases in glucose uptake that exogenous uridine can produce. Did the uridine-induced increases in protein glycosylation exert protective effects by increasing the glucose uptake, or did the uridine-induced increases in glycogen formation, in the face of increases in glucose uptake by other mechanisms, buffer ATP levels and thereby maintain the normal, relative amounts of different UDP-hexosamines that are required for glycosylation reactions that produce other protective effects? Similarly, one can't look at an article on GFAT overexpression in mice, see a lot of adverse effects, and conclude that GLN is going to produce the same effects as GFAT overexpression will (for the reasons I discussed above, involving energy metabolism, primarily). But another important point is that, in some of those articles using high doses of GLN, one has to consider the cumulative, potentially-depleting effects of GLN-induced hexosamine and UDP-hexosamine formation on the intracellular and even plasma inorganic phosphate pools. I have at least one article showing that exogenous uridine can deplete the inorganic phosphate pool, and I'll try to put it up. Another thing to consider would be the use of uridine, glutamine, and inorganic phosphate in some sort of combination approach. The uridine could suppress de novo pyrimidine biosynthesis and avoid some of the undesirable effects of a high rate of de novo pyrimidine (uridine) formation (as discussed in past postings, orotate has tended to lead to ATP depletion in animal experiments) and also prevent the sequestration of uridine, some of which is obviously required for glycogen formation, in UDP-hexosamines. But the goal, in my opinion, should really be to normalize the availability of GLN to the brain or skeletal muscles, in order to prevent unnecessary exercise-induced ATP depletion, etc. There can be a significant increase in ATP turnover in the brain and, obviously, in skeletal muscles during high-intensity exercise. That's separate, to a large extent, from the issue of ischemia.
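To make the "sequestration" argument concrete, here are the textbook reactions that carry glucosamine-6-phosphate on to UDP-N-acetylglucosamine. The reactions themselves are standard; the speculative part is how large the quantitative impact on the Pi, uridine, and ATP pools would actually be in any given tissue.

\text{glucosamine-6-P} + \text{acetyl-CoA} \longrightarrow \text{N-acetylglucosamine-6-P} + \text{CoA}
\text{N-acetylglucosamine-6-P} \rightleftharpoons \text{N-acetylglucosamine-1-P}
\text{N-acetylglucosamine-1-P} + \text{UTP} \longrightarrow \text{UDP-N-acetylglucosamine} + \text{PP}_{i}

Each mole of UDP-GlcNAc therefore ties up one hexose phosphate and one uridine nucleotide, and the UTP that gets consumed has to be regenerated from UDP at the expense of ATP. That's the sense in which a sustained increase in UDP-hexosamine formation can draw on the Pi, uridine, and ATP pools all at once.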
Sunday, September 27, 2009
Gone Menthol
This thing (http://www.usatoday.com/news/health/2009-09-27-menthol-cigarettes_N.htm) is about the banning of menthol cigarettes by the FDA. I'm not even certain about what a menthol cigarette is, but I think it's a flavored cigarette. I can't read through the insane quotes in that article. The author of the article quoted people who had made all of these bizarre statements about having "gone menthol" or something. "@#%#, yes, I went menthol." I don't think banning things like that is a good idea. I obviously think smoking is really bad, and I'm not a smoker. But I dunno. The last time they tried to ban high-nicotine cigarettes or something like that, smokers just ended up smoking more cigarettes to compensate. How does taxing smokers into bankruptcy accomplish anything? This is not a very interesting topic, but I should say that, in fact, I don't like the FDA. It's a cliched thing to say, but the FDA has become so cautious about making statements that, in my opinion, a mild or conservatively-worded warning from the FDA is likely to be, "not infrequently," a sign that there's a serious or significant problem. But then, when it comes to things like cigarettes, the FDA isn't afraid to make "statements." What exactly does the FDA do these days? I can't figure it out. I can't remember the last time they did anything decent, but that's just my opinion.
Saturday, September 26, 2009
"Good for Postischemic Damage Control!"
I forgot to mention it, but one reason that all of the research on the protection, by GLN and other supposed energy substrates, of the energy charge or adenylate charge during ischemia is "relevant" is that even high-intensity exercise, for example, can cause transient, very mild cerebral ischemia and is well known to cause metabolic stress in the GI tract that essentially amounts to a low-level ischemia-reperfusion injury. The blood flow to the muscles can increase at the expense of the blood flow to other tissues, etc. But the point is that ischemia can be viewed as being an "everyday event." However, I recently saw some "sports supplement" that contained some questionable messages on the label, such as a "breezy" or even "happy"-looking blurb that read, "Good for preventing ATP depletion after cerebral ischemia!" There was a superscripted star by the "a" of ischemia, and the star referred the reader to the qualifying statement that read "(during your friendly-neighborhood exercise)". So at least there was that. But I still thought that that was in bad taste, just to a small extent. It's fine to face the facts, but...no, there was no such supplement.
In Conclusion
In conclusion, the fact that I've learned the things discussed in that posting (http://hardcorephysiologyfun.blogspot.com/2009/09/downregulation-of-glutamine-gln.html) is the reason I am, of course, never dismissive of articles, published in the popular press, on influenza, for example.
Downregulation of Glutamine (GLN) Synthetase Activity in Response to GLN: Relevance to Research on GLN as an Energy Substrate and "ATP-Sparing Agent"
In this article [Mignon et al., 2007: (http://www.ncbi.nlm.nih.gov/pubmed/17947599?dopt=Abstract)], Mignon et al. (2007) found that glutamine (GLN) supplementation only produced statistically-significant reductions in the activity of glutamine synthetase (GS), in the skeletal muscles, in the fed state in aged rats and in the fasted state in adult rats. The GLN-induced decreases in GS activity in the other "states" (the fasted state in aged rats and the fed state in adult rats) were not statistically significant. It's interesting that the tissue concentrations of GLN and glutamate, which are going to be mainly intracellular, and the plasma concentrations of GLN and glutamate did not increase in response to supplementation. Those findings, when viewed alongside the reductions in GS activity, are consistent with my sense of the way GLN supplementation is likely to exert its supposed therapeutic effects [see here for my bare-bones paper on GLN: (http://hardcorephysiologyfun.blogspot.com/2009/08/some-more-old-papers-of-mine.html)], as discussed below. Mignon et al. (2007) cited research that had shown that hypermetabolic, or "catabolic," states, such as those that can occur after surgeries or other causes of physiological stress, have generally been associated with an upregulation of GS activity, and researchers have typically attributed those increases in GS activity to glucocorticoid-mediated increases in the mRNA expression of GS or to other factors, etc.
That research by Mignon et al. (2007) is relevant to the use of GLN as an energy substrate, in general, and to its use as an "adjunctive" energy substrate in the treatment of depression, etc. There's only one article on the use of GLN as an adjunctive antidepressant [Cocchi, 1976: (http://hardcorephysiologyfun.blogspot.com/2009/03/gabaergic-effect-of-l-glutamine-in-rats.html)], and its efficacy has obviously not been proven and will never be proven. But that article by Cocchi (1976) is remarkable in the sense that the author's observations are generally consistent with the kinds of effects that one would expect to see, based on all the research that has been done, in response to GLN. The author also noted that the therapeutic window was relatively narrow, and, in my experience, it's extremely narrow and changes in response to changes in exercise intensity and to changes in factors that affect serum calcium (such as vitamin D). All I can do is relate my sense of things, and I don't have a good explanation for the reason the range of therapeutic dosages would be so small. I mean that tiny increases in the dosage can either produce beneficial effects, in terms of the effects that one would ideally expect from an energy substrate, under some conditions, or can cause effects that seem to be consistent with the GABAergic effects that Wang et al. (2007) described [see that past posting for my discussion of this: Wang et al., 2007: (http://www.fasebj.org/cgi/reprint/21/4/1227)(http://www.ncbi.nlm.nih.gov/pubmed/17218538?dopt=Abstract)].
The finding that exogenous GLN can decrease GS activity without increasing the steady-state intracellular GLN concentrations in skeletal muscle myocytes (and satellite cells, etc.) is significant in relation to an understanding of GLN metabolism in general, and the finding can be explained by the fact that exogenous GLN can increase the 26S-proteasomal degradation of the GS enzyme protein [Labow et al., 2001, cited here: (http://hardcorephysiologyfun.blogspot.com/2009/08/some-more-old-papers-of-mine.html)]. That's really important, but there seems to be some sort of resistance to the idea that GLN is likely, in my opinion, to exert many of its effects by virtue of its capacity to serve as an energy substrate. There are many articles that have shown this, and I'm not going to collect all of them right now [the protection by GLN against damage due to ischemia is basically a result of its capacity to be converted into 2-oxoglutarate and undergo oxidation in the TCA cycle, and here are some of those articles showing protection against ischemic damage: (http://scholar.google.com/scholar?q=glutamine+ischemia&hl=en)]. There's at least one article showing that it acutely improves cardiac function in humans with heart failure or heart disease [here it is: Khogali et al., 2002: (http://www.ncbi.nlm.nih.gov/pubmed/11844641)].
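For reference, the route from GLN into the TCA cycle is short. These are the standard reactions; the real question, as in the Pascual et al. (1998) data discussed below, is how much flux goes this way in stressed or ischemic tissue:

\text{L-glutamine} + \text{H}_2\text{O} \longrightarrow \text{L-glutamate} + \text{NH}_4^+ \quad (\text{glutaminase})
\text{L-glutamate} + \text{NAD}^+ + \text{H}_2\text{O} \longrightarrow \text{2-oxoglutarate} + \text{NADH} + \text{NH}_4^+ \quad (\text{glutamate dehydrogenase; transamination is the other route})

The 2-oxoglutarate then enters the TCA cycle and is oxidized, and that's the basis for calling GLN an energy substrate in the first place.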
The key point, however, is that GS activity consumes enormous amounts of ATP, and very few tissues in the body are characterized by a net formation of GLN. There are all of these articles discussing the fact that the GLN-glutamate-GABA cycle accounts for 70-80 percent of the ATP consumption in the brain, and a lot of articles emphasize the fact that astrocyte-derived GLN is utilized as a major energy substrate for neurons. But the downregulation of GS activity by exogenous GLN is likely not to be accompanied by major increases in either the steady-state extracellular or intracellular GLN or glutamate concentrations, and, following a brain injury, there might not even be any post-infusion, detectable increase in the extracellular-fluid GLN concentrations in the brain [the CNS "parenchymal" interstitial fluid (ISF) concentrations]. This phenomenon has been shown in the liver and in cultured cells, also [see Yudkoff et al., 1988, and Qu et al., 2001, cited here: (http://hardcorephysiologyfun.blogspot.com/2009/05/problems-with-glutamine-research.html)], and I've cited all the research in past postings. The turnover is so rapid and so massive that an infusion of even multi-gram amounts, in the context of the 23- to 60-fold increases in the rate of oxidation of GLN carbons in the TCA cycle that occur in the brain, following ischemia [see here: (http://hardcorephysiologyfun.blogspot.com/2009/05/oxidation-of-glutamate-derived-2.html); Pascual et al., 1998: (http://stroke.ahajournals.org/cgi/content/full/strokeaha;29/5/1048)(http://www.ncbi.nlm.nih.gov/pubmed/9596256)], could easily fail to elevate ISF GLN in the brains of people who have traumatic brain injuries. But the downregulation of GS activity by GLN could, nonetheless, spare significant amounts of ATP, and, of course, ATP depletion is going to occur sooner or later after a brain injury. One can sometimes show no ATP depletion for a little while after an injury, but that's probably because structural damage to the mitochondria takes a couple of days to occur. Another reason that the GLN-mediated decreases in ATP consumption by GS activity would be desirable, in my opinion, is that glutaminase can, especially under those conditions in which the oxidation of GLN carbons is drastically augmented (e.g. after a brain injury or even, arguably, under milder conditions of deranged energy metabolism), escape feedback inhibition by intramitochondrial glutamate. Essentially, glutamate formed by the glutaminase-mediated deamidation of GLN (in the mitochondria) is likely to be oxidized or otherwise utilized with exceptional rapidity, and that means that the pool of glutamate that is available to exert feedback inhibition of glutaminase activity [see here for discussion: (http://hardcorephysiologyfun.blogspot.com/2009/05/oxidation-of-glutamate-derived-2.html); Brand and Chappell, 1974: (http://www.pubmedcentral.nih.gov/picrender.fcgi?artid=1167992&blobtype=pdf)(http://www.ncbi.nlm.nih.gov/pubmed/4375961)] is going to be even more limited than it usually is. That change in the normal allosteric regulation of glutaminase could create an ATP-consuming futile cycle, for all practical purposes, in tissues following ischemia, and GLN could be one approach to breaking that futile cycle.
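To spell out the futile-cycle point, pairing the GS reaction with the glutaminase reaction shows why simultaneous, poorly restrained flux through both amounts to net ATP hydrolysis. This is just the textbook stoichiometry, not a claim about the actual fluxes in any particular tissue:

\text{L-glutamate} + \text{NH}_4^+ + \text{ATP} \longrightarrow \text{L-glutamine} + \text{ADP} + \text{P}_{i} \quad (\text{glutamine synthetase})
\text{L-glutamine} + \text{H}_2\text{O} \longrightarrow \text{L-glutamate} + \text{NH}_4^+ \quad (\text{glutaminase})
\text{net, per turn of the cycle:} \quad \text{ATP} + \text{H}_2\text{O} \longrightarrow \text{ADP} + \text{P}_{i}

Downregulating GS, in a cell that is simultaneously deamidating GLN at a high rate, is what would "break" that cycle and spare the corresponding ATP.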
Anyway, the point is that GLN could reduce ATP consumption in skeletal muscles ("spare" ATP) or in the brain [it does cross the blood-brain and blood-CSF barriers, and that's apparent and is discussed in articles cited here: (http://hardcorephysiologyfun.blogspot.com/2009/08/some-more-old-papers-of-mine.html)] without necessarily producing drastic or even any changes in the tissue or plasma or ISF GLN concentrations, particularly following ischemia or hypoxia or other physiological stressors that can, as found by Pascual et al. (1998), cited above, increase the percentage (and rate) of the intracellular GLN-derived glutamate pool that is oxidized, upon its metabolism into 2-oxoglutarate, in the TCA cycle. The rates of GLN synthesis, by ATP-consuming GS, and degradation are very high in many tissues, and that's one reason that so few cell groups display an overall, net output of GLN. At very high or otherwise excessive GLN intakes, the adverse effects of the extra ammonia could conceivably outweigh the benefits associated with the supposed ATP-sparing effects. GLN could also interfere with the transport of citrulline or other amino acids or intermediates, as discussed in past postings.
Incidentally, other researchers [Young et al., 1993: (http://www.ncbi.nlm.nih.gov/pubmed/8289407); Morlion et al., 1998: (http://www.pubmedcentral.nih.gov.floyd.lib.umn.edu/picrender.fcgi?artid=1191250&blobtype=pdf)(http://www.ncbi.nlm.nih.gov/pubmed/9488531)] have reported that people who had been treated with intravenous L-alanyl-L-glutamine (the stable dipeptide "form" of glutamine that can be stored in i.v. solutions long-term) had noted improvements in "mood" or "well-being." It's easy to dismiss things like that, but it's possible to easily dismiss things to the detriment of...oneself. "It's not necessarily *good* to be dismissive of *things*." That's the end of this posting.
Friday, September 25, 2009
Hamburger Helper
Here's that olllll' ditty:
http://www.youtube.com/watch?v=c87VzSOdI04
http://www.youtube.com/watch?v=vx3IFMo0Q5M
But don't touch the hamburger helper, for crying out loud (http://hardcorephysiologyfun.blogspot.com/2009/09/rambler-on-magnesium-and-sad-ironies.html). Don't eat it. Use the hamburger helper in a live organism? Huh? Who would want to do that? Are you from a different planet?
Rambler on Magnesium and Sad Ironies
In this article [Vink et al., 1988: (http://www.jbc.org/cgi/reprint/263/2/757.pdf)(http://www.ncbi.nlm.nih.gov/pubmed/3335524)], the authors found that the depletion of intracellular free magnesium (Mg2+) correlated positively with the magnitude of the damage that was produced by experimental brain injuries in rats, and the administration of intravenous Mg2+, 5 minutes before the injuries, prevented much of the damage. The authors cited some interesting data on the pH dependence of the calculations, based on 31P-MRS data, of the intracellular free Mg2+ values, and the authors used data on the dissociation constant of MgATP(2-) at pH 7.2 (50 uM). It's interesting that the mean pre-injury, intracellular free Mg2+ concentration was 1.01 mM (1,010 uM), and the mean concentration was 0.26 mM (260 uM) by 3 hours post-injury. Vink et al. (1988) also cited research (reference 13, cited on p. 761) that had shown the rate of DNA synthesis in cultured fibroblasts to decrease logarithmically at intracellular free Mg2+ concentrations below 0.24 mM (240 uM). In that article, the rate of protein synthesis was also down to almost nothing at those low concentrations. Resnick et al. (1997) [Resnick et al., 1997: (http://hyper.ahajournals.org/cgi/content/full/30/3/654)(http://www.ncbi.nlm.nih.gov/pubmed/9322999?dopt=Abstract)] found that the intracellular free Mg2+ levels were inversely correlated with the ages of people, meaning the levels decrease as people get older. It's interesting that the mean concentration of intracellular free Mg2+ in people who were hypertensive was 0.284 mM (284 uM), and the mean concentration in normotensive controls was only 0.383 mM (383 uM). One could argue that the rate of DNA synthesis in mitotic cells (fibroblasts) is going to be much higher at specific points in the cell cycle, but then why are the intracellular free Mg2+ levels in the brains of normal rats 3-4 times the levels in the brains of humans? I'll bet one reason is that laboratory animals generally receive higher intakes of magnesium, in addition to phosphate, etc. One could make the argument that the higher zinc or copper intakes of animals eating "rat chow," or whatever, would cause some neurotoxicity and balance out the benefits that have sometimes been associated with higher Mg2+ intakes. But the discrepancies between rat and human diets tend not to be as significant for some of those metals, like copper and zinc. That doesn't sound like a very good situation, with intracellular free Mg2+ concentrations being that low. This basic search on magnesium in relation to neuroprotection or neurotoxicity yielded 45,000+ results (http://scholar.google.com/scholar?hl=en&q=magnesium+neuroprotective+OR+neurotoxic+OR+ischemia+OR+neurodegenerative). In that search, there's one article (Harkema et al., 1992) in which researchers discussed the use of parenteral MgATP(2-) to protect against different kinds of trauma. If only someone could market a simple acylated prodrug of ATP (or an adenosine prodrug along with dibasic orthophosphate in a 1:3 ratio or something) that would release adenosine slowly enough not to produce hypotension but quickly enough to outperform the effects of oral ATP. They could have done it back in the 1960s, when researchers were obtaining the first use patents for acylated nucleosides. Think of the effects that type of simple approach (or a prodrug of inosine, etc.) could have had in clinical neuroscience, even in the years since 1992. It's moving up on 20 years since 1992.
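As a rough illustration of how those intracellular free Mg2+ values get estimated from 31P-MRS data, here's a minimal sketch in Python. It just applies the definition of the MgATP(2-) dissociation constant (the 50 uM value at pH 7.2 cited above) to the fraction of ATP that is complexed with Mg; that bound fraction is what the chemical-shift measurements are actually used to estimate, and the example fractions below are made-up numbers that I chose so that the outputs land near the 1.01 mM and 0.26 mM figures, not numbers taken from the paper itself.

# Minimal sketch: estimating intracellular free Mg2+ from the MgATP(2-)
# dissociation constant and the fraction of ATP complexed with Mg.
# The Kd is the cited value; the example bound fractions are illustrative.

KD_MGATP_UM = 50.0  # apparent Kd of MgATP(2-) at pH 7.2, in micromolar


def free_mg_um(fraction_atp_bound: float) -> float:
    """Free [Mg2+] implied by the MgATP dissociation equilibrium.

    Kd = [Mg2+][ATP_free] / [MgATP], so [Mg2+] = Kd * phi / (1 - phi),
    where phi is the fraction of total ATP present as MgATP(2-).
    """
    phi = fraction_atp_bound
    return KD_MGATP_UM * phi / (1.0 - phi)


if __name__ == "__main__":
    # Hypothetical bound fractions, chosen only to illustrate the arithmetic.
    for label, phi in [("pre-injury (example)", 0.95), ("post-injury (example)", 0.84)]:
        print(f"{label}: fraction of ATP bound = {phi:.2f}, free Mg2+ ~ {free_mg_um(phi):.0f} uM")

The point of the sketch is just that the estimate is only as good as the Kd and the pH correction, which is why the authors' discussion of the pH dependence matters.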
It's sadly ironic, in my mind, that it's ATP and nucleotides that have all of these effects, that have been researched heavily, and that aren't being utilized and probably won't be for a long time, in spite of the thousands of articles on all of these things. But the irony is that researchers have been dumping nucleotide triphosphates into PCR machines all over the place and have been using them in experiments like candy or "hamburger helper." In any case, no one would think magnesium could be a standalone treatment for strokes or anything, and there are all of those details, as discussed in past postings, related to the fact that magnesium supplementation tends to decrease serum phosphate levels, sometimes very significantly. But those issues are not all that difficult to address, as long as one is aware of the potentially-large magnitude of the interaction, etc.
Wednesday, September 23, 2009
Relationships of Intracellular Free Magnesium to the Cytosolic Phosphorylation Potential and Rate of Mitochondrial ATP Synthesis
Jacobsen et al. (2001) [Jacobsen et al., 2001: (http://www.ncbi.nlm.nih.gov/pubmed/11431727)] found that the intracellular free magnesium (meaning Mg2+, abbreviated Mg, that was not bound to proteins or complexed with nucleotides) concentrations, in the skeletal muscles of people who exhibited cirrhosis, correlated positively with the maximal rates of ATP formation that the authors measured, using 31P-MRS, after the people had just finished exercising. The authors estimated the intracellular free Mg levels by taking into account the intracellular pH and looking at the difference between the chemical shift of alpha-ATP, or alpha-NTP (nucleoside triphosphates, which are assumed to consist primarily of ATP), and the shift of beta-ATP/beta-NTPs. Heath and Vink (1999) [Heath and Vink, 1999: (http://jpet.aspetjournals.org/cgi/reprint/288/3/1311)(http://www.ncbi.nlm.nih.gov/pubmed/10027872)] found that intravenous Mg increased and thereby normalized the cytosolic phosphorylation potential (CPP) in rats that had been given experimental brain injuries, and the intracellular free Mg concentration and CPP values both correlated positively with markers that were indicative of favorable neurological outcomes. The CPP = [ΣATP]/([ΣADP][ΣPi]), and the calculation of the ADP species requires one to take into account the intracellular pH and free Mg levels. The sum of the ATP species, [ΣATP], includes MgATP(2-) and ATP(4-), and [ΣADP] includes the concentrations of MgADP(-) and ADP(3-) but also takes into account the influence of the Mg availability on the overall, intracellular creatine kinase equilibrium that is a reflection of the mitochondrial and cytosolic equilibria. I'd like to know the assumptions that the authors made about the relative abundances of MgATP(2-) and ATP(4-). It's not clear to me that the authors are using the intracellular Mg concentration as a basis for estimating the relative amounts of MgATP(2-) and MgADP(-), in relation to the free nucleotides. I've seen some authors assume that most or all of the ATP exists as MgATP(2-), and this is unlikely to be the case in the cells of most humans, in my opinion. I get the impression that Heath and Vink (1999) only took into account the shift that an increase in Mg availability produces in the overall creatine kinase equilibrium. Mg tends to shift the equilibrium constant to increase the phosphocreatine/creatine ratio at equilibrium. But the Mg-induced increase in the CPP could have partially resulted from increases in the proportions of MgATP(2-) and MgADP(-) (meaning that more total ADP would be available and would allow for more total ATP) and not just from an effect of Mg on ADP, etc. I've seen other authors argue that the increases in the CPP that occur in association with increases in free Mg are not desirable in the context of presumably- or definitively-chronic mitochondrial dysfunction in the brain, as in people who have cluster headaches or mitochondrial disorders resulting from mutations in the nuclear or mitochondrial genomes. The argument by some of those authors has been that an increase in the CPP may be associated with an increase in oxidative stress, given that a higher CPP is indicative of a high rate of ATP turnover. Heath and Vink (1999) found that, in the period shortly after a traumatic brain injury, the ATP levels were not decreased. That might be one reason for the fairly clear benefit of Mg.
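To make the CPP arithmetic explicit, here's a minimal sketch, under simplifying assumptions, of the ratio being discussed. It treats the CPP simply as [ΣATP]/([ΣADP][ΣPi]) and estimates the free ADP pool from the creatine kinase equilibrium, which is the usual approach in the 31P-MRS literature, rather than from a direct measurement. The apparent equilibrium constant and the example concentrations below are illustrative assumptions on my part, not the values that Heath and Vink (1999) or Jacobsen et al. (2001) used.

# Minimal sketch: cytosolic phosphorylation potential (CPP) from 31P-MRS-style inputs.
# CPP = [sum ATP] / ([sum ADP] * [sum Pi]), with free ADP estimated from the
# creatine kinase (CK) equilibrium. All concentrations and the apparent K_CK
# below are illustrative assumptions.


def free_adp_mM(atp_mM: float, creatine_mM: float, pcr_mM: float,
                h_conc_M: float, k_ck_apparent: float = 1.66e9) -> float:
    """Estimate free [ADP] (in mM) from the CK equilibrium:
    PCr + ADP + H+ <-> Cr + ATP, so [ADP] = [ATP][Cr] / (K_CK * [PCr] * [H+]).
    K_CK (in 1/M) is an apparent constant that itself depends on pH and free Mg2+,
    which is one place the Mg effect being discussed enters the calculation.
    """
    return (atp_mM * creatine_mM) / (k_ck_apparent * pcr_mM * h_conc_M)


def cpp(atp_mM: float, adp_mM: float, pi_mM: float) -> float:
    """CPP = [ATP] / ([ADP][Pi]); with these units, the result is in 1/mM."""
    return atp_mM / (adp_mM * pi_mM)


if __name__ == "__main__":
    atp, pcr, cr, pi = 3.0, 25.0, 10.0, 3.0  # mM, illustrative resting-muscle-like values
    h = 10 ** (-7.1)                         # [H+] in M, for an intracellular pH of 7.1
    adp = free_adp_mM(atp, cr, pcr, h)
    print(f"estimated free ADP ~ {adp * 1000:.0f} uM, CPP ~ {cpp(atp, adp, pi):.0f} per mM")

The practical point is that anything that changes the apparent CK constant or the assumed MgATP(2-)/ATP(4-) split, which free Mg does, propagates directly into the calculated ADP and therefore into the CPP.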
Although there is the potential for Mg to cause some strange effects that are not always going to be beneficial, a large amount of research has shown that Mg depletion is harmful to the brain and that Mg repletion tends to be beneficial, in my opinion. The Mg-induced, transient decreases in blood pressure or Mg-induced decreases in the peripheral vascular resistance could be less-than-beneficial after brain injuries, in some cases. Mg-induced peripheral vasodilation could reduce venous return by decreasing the sympathetic outflow from the CNS, and a decrease in venous return to the heart could tend to reduce the cardiac output and thereby reduce cerebral blood flow in some patients. In some people who have had brain injuries, the regional cerebral blood flow can be dependent upon and positively correlated, up to a point, with the cardiac output or mean arterial pressure ("pressure-passive" autoregulation of cerebral blood flow, etc.). It's possible that that type of dependence could show up, to a lesser degree, in some people who have psychiatric disorders, in my opinion, or in chronic fatigue syndrome that is accompanied by orthostatic tachycardia or hypotension (orthostatic tachycardia usually indicates that the sympathetic activity is decreased, in my view). In those cases, Mg could help up to some individualized point or dosage but could then become counterproductive as one kept increasing the dosage, because of reductions in venous return or mean arterial pressure or because of other mechanisms, such as Mg-induced, excessive increases in cytosolic 5'-nucleotidase activities, etc. But then, in the longer term, one might expect Mg repletion to reduce that kind of aberrant regulation of cerebral blood flow. Pressure-passive autoregulation can result from vasospasm, in which there's localized vasoconstriction that persists in the face of the increases in shear stress that would normally produce vasodilation, etc. The smooth muscle cells of cerebral arteries are exceptionally sensitive to changes in calcium influx, and that's one reason Mg, as a mild calcium channel antagonist, is thought to produce prominent cerebral vasodilatory effects. The antithrombotic effects that Mg can exert could also gradually cause the regulation of regional cerebral blood flow to become less dependent on changes in the mean arterial pressure or cardiac output. But some of the "sympatholytic" effects of excessive amounts of Mg could become counterproductive in ways that might not be overcome by antithrombotic or cerebral vasodilatory effects of Mg. I get the feeling that a lot of people find it disturbing to think that some cases of severe depression or chronic fatigue syndrome are partially a result of reductions in regional cerebral blood flow (and that increasing cerebral blood flow might ameliorate those symptoms), but, in my opinion, it's very likely to be the case. Look at the association of migraine with depression or whatever else. The authors of one of those MRS studies of the effects of SAM-e, with its measly effects on the intracellular adenosine nucleotide pools in endothelial cells and neurons and on the perivascular interstitial fluid adenosine levels, found some evidence that SAM-e might have increased cerebral blood flow in a subset of people with depression. Obviously, ATP disodium could reasonably be expected to increase the regional cerebral blood flow in parts of the brain in which the cerebral blood flow is reduced. But that's just my opinion.
But there has to be some mechanism to account for the magnitude of the effects of something like that, and, in my opinion, whatever effects may occur are either going to be a result of AMP- and ADP-stimulated respiration or glycolytic activity (and, consequently, glucose uptake) or of increases in cerebral blood flow or both or similar "secondary" effects on energy metabolism. In other words, any increases in ATP levels or in the rates of ATP turnover would probably not be only a result of increases in the overall pools of adenosine nucleotides per se, independent of the secondary changes in glucose consumption or uptake or of oxygen uptake, etc. I say that because the effects of adenosine really can't be accounted for by the capacity of its ribose moiety to serve as a precursor of glycolytic intermediates, as shown by some of the research I've cited in past postings.
Sunday, September 20, 2009
Calcium and Magnesium in Anxiety and Depression: Potential Relevance of Changes in Intracellular Calcium (Ca) in Neurons or Endothelial Cells
I can't get the full text of this article now [Jung et al., 2009: (http://www.springerlink.com/content/73r65221u5g14456/)], but the authors found that markers of depression and excessive stress (results on the usual tests or questionnaires, etc.) were higher in people whose serum calcium/magnesium (Ca/Mg) ratios were in the highest of three ranges of values. (The authors grouped the various serum Ca and Mg values into three ranges, or tertiles, and a higher score on one or more of the tests was taken as being an indication that the people had been experiencing more depression or anxiety, at the time they'd taken the tests.) Low serum Mg levels were also associated with more depression or anxiety.
That article looks interesting and is likely to have some validity to it, and one could interpret the results in a number of different contexts. I think elevations in serum Ca could produce depression by producing ATP depletion in neurons and astrocytes in parts of the brain or in cerebral vascular endothelial cells, and that ATP depletion could result from an excessive degree of activation of the coagulation cascade or from an excessive rate of calcium influx into neurons. An important effect of Mg (Mg2+) is to act as a mild Ca channel antagonist, either by acting extracellularly or intracellularly. One thing that researchers hardly ever discuss in the literature is that the rate of Ca influx into cells does, in fact, tend to increase in response to increases in extracellular Ca [one example: Hennings et al., 1989: (http://www.ncbi.nlm.nih.gov/pubmed/2702726)]. That's just really important, and I can't emphasize enough the importance of that phenomenon. The way the authors of most articles describe the regulation of Ca influx, one would think Ca influx is so "strictly regulated" as to be more "inviolable" than "Fort Knox." The intracellular Ca concentrations are something like one 10,000th of the extracellular Ca concentration, normally, and the intracellular and intramitochondrial Ca concentrations are highly regulated. But they can, nonetheless, be increased or decreased to a meaningful extent, in my opinion, in response to increases or decreases in extracellular Ca. And Mg tends to block Ca channels to some extent, but it doesn't just behave like a pharmacological Ca channel antagonist. But the point is that an excessive rate of Ca influx into platelets can augment their thrombogenic effects, and excessive Ca influx into neurons, in the long term, tends to decrease dopaminergic transmission and worsen cognitive functioning and oppose all of the effects that occur in response to an acute increase in Ca influx or that occur under conditions of tonic or phasic dopamine release, etc. Chronic stress can lead to excessive glutamatergic stimulation of noradrenergic neurons, in the locus ceruleus and other adrenergic cell groups, and dopaminergic neurons, such as in the ventral striatum, and thereby cause excessive Ca influx, and this tends to impair mitochondrial functioning and thereby cause ATP depletion [I shouldn't have to cite anything for this, given that it's so well-known, but here are some hastily-chosen articles that discuss that: Knochel, 2000: (http://www.ncbi.nlm.nih.gov/pubmed/10806294); Moghtader et al., 1997: (http://www.ncbi.nlm.nih.gov/pubmed/9434995)]. This can tend to decrease noradrenergic and dopaminergic transmission, and the mild NMDA-receptor antagonism of Mg or other weak NMDA-receptor antagonists can acutely and paradoxically sensitize dopaminergic neurons, for example, to D1 dopamine receptor activation [see here (http://hardcorephysiologyfun.blogspot.com/2009/03/adenosine-and-guanosine-in-animal.html) and, for example, Peeters et al., 2002: (http://www.ncbi.nlm.nih.gov/pubmed/12213297); Deep et al., 1999: (http://www.ncbi.nlm.nih.gov/pubmed/10529725); Arai et al., 2003: (http://www.ncbi.nlm.nih.gov/pubmed/12711097); Konradi et al., 1996: (http://www.jneurosci.org/cgi/reprint/16/13/4231)(http://www.ncbi.nlm.nih.gov/pubmed/8753884?dopt=Abstract); Tokuyama et al., 2001: (http://www.ncbi.nlm.nih.gov/pubmed/11408088); Boyce-Rustay et al., 2006: (http://www.ncbi.nlm.nih.gov/pubmed/16482087)].
The main point is that an excessive rate of Ca influx, in response to or in the presence of any of the countless stimuli that normally increase or regulate Ca influx, is just generally detrimental to all sorts of physiological processes and to energy metabolism in particular. Excessive Ca influx, in response to ischemia or other metabolic insults, activates Ca-dependent proteases that cause many more problems and worsen ATP depletion, ATP depletion impairs the capacity of cells and their mitochondria to buffer intracellular and intramitochondrial Ca concentrations, and so on (http://scholar.google.com/scholar?hl=en&q=calcium+dependent+protease+ischemia). Calcium influx is obviously essential, but the key point that is not obvious in the literature is that, in many or most cases, in my opinion, there is no shortage of Ca influx. And Ca influx and serum Ca are not in danger of being too low in most disease states. One would obviously want to discuss these things with one's doctor, however, and I'm just talking about these types of adjunctive approaches. Obviously, zinc and copper supplementation would be things to consider cutting out entirely, supposing one were interested in "addressing" a psychiatric condition, given the countless reports of neurotoxicity from excessive zinc supplementation and the known, endless problems associated with an excess of intracellular or extracellular copper and with copper supplementation in general. But these are just my opinions, and one's doctor is going to be the person to advise any given individual.
Saturday, September 19, 2009
Heterogeneous Precipitation/Nucleation as a Mechanism Leading to "Chaos" in Magnesium and Phosphate Homeostasis
In their in vitro experiments at physiological pH values, Sheikh et al. (1989) [Sheikh et al., 1989: (http://www.pubmedcentral.nih.gov/picrender.fcgi?pmid=2910921&blobtype=pdf)(http://www.ncbi.nlm.nih.gov/pubmed/2910921)] found that calcium (Ca2+) acetate was more effective in binding phosphate (Pi) than calcium carbonate was, and the authors also found that magnesium (Mg2+) was less effective than calcium in binding phosphate in vitro. However, Spiegel et al. (2007) [Spiegel et al., 2007: (http://www.ncbi.nlm.nih.gov/pubmed/17971314)] cited research (reference 10, cited on p. 421) in which the authors had made the argument that Mg2+ is likely to bind more phosphate than Ca2+ in vivo, primarily because less Mg2+ is going to be absorbed than Ca2+. I think that argument is likely to be valid, and the main thing would be to try to separate the administration of supplemental Mg2+ from the administration of Pi by at least 2 hours [Heaney, 2004: (http://www.mayoclinicproceedings.com/content/79/1/91.full.pdf)(http://www.ncbi.nlm.nih.gov/pubmed/14708952)]. But the significance of the in vitro comparison of Ca acetate and Ca carbonate is that those experiments (Sheikh et al., 1989) provide an indirect explanation of one mechanism by which so-called Pi binders, including Ca alpha-ketoglutarate, reduce serum Pi in vivo. The mechanism is the formation, in the intestinal lumen, of heterogeneous precipitates (a.k.a. epitaxial growth of precipitates, heterotopic crystallization, etc.) that are amorphous or crystalline in their structures and that are composed of one or more anionic species, including urate or oxalate or ketoglutarate or other dicarboxylic acids (or even unconjugated bilirubin or bile salts, etc.) and Ca or Mg or both [(http://scholar.google.com/scholar?hl=en&q=epitaxial+phosphate+calcium)]. I should mention that the sequestration of Pi in the intestinal tract can't explain the reductions in serum Pi that the parenteral administration of amino acids has sometimes produced (http://hardcorephysiologyfun.blogspot.com/2009/09/reductions-in-serum-phosphate-induced.html). The reason is that the amino acids were administered parenterally and not enterally (i.e. jejunally or duodenally or orally or whatever variation on that). In any case, these are some of the articles showing the serum Pi-lowering effect of Ca alpha-ketoglutarate [Birck et al., 1999: (http://ndt.oxfordjournals.org/cgi/reprint/14/6/1475.pdf)(http://www.ncbi.nlm.nih.gov/pubmed/10383011); (http://scholar.google.com/scholar?hl=en&q=calcium+ketoglutarate+phosphate+binder)], and I don't think much of those calcium salts of organic anions as Pi binders or of Ca supplements in general. But it's interesting that bilirubin can form heterogeneous precipitates with Ca Pi, and "Ca Pi" supplementation decreased plasma bilirubin without altering the rates of urinary calcium or phosphate excretion [Van der Veere et al., 1997: (http://www.ncbi.nlm.nih.gov/pubmed/9024299)]. These are some other articles that show that effect (http://scholar.google.com/scholar?hl=en&q=calcium+phosphate+bilirubin).
Those mechanisms could mean that reasonable but not excessive intakes of Pi could serve to increase or "maintain" the excretion of bilirubin, but higher dosages could produce more of a nucleating effect and produce cholelithiasis (gallstones composed of mixed Ca and Mg precipitates of urate and phosphate, etc.). In the context of purine nucleotide supplementation, small changes in the ratios of phosphate, derived from nucleotide monophosphates or triphosphates, to exogenous-nucleotide-derived urate and xanthine could influence the formation of those types of precipitates. Adenosine that reaches the liver could increase Pi uptake by sequestering Pi in purine nucleotides and by increasing the activities of phosphofructokinase and other glycolytic enzymes, but an increase in biliary urate excretion (a significant amount can be excreted in the bile, rather than the urine), as a result, could increase the formation of heterogeneous precipitates with Ca Pi in the common bile duct and cause a biliary obstruction, etc. Those types of interactions would probably not be significant at most dosages, in my opinion, but it's potentially useful to be aware of that type of thing. That type of "extreme" scenario would be unlikely in anyone who is using reasonable dosages but could be more likely to occur in a person who is diabetic or insulin-resistant, for example.
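Just to put a rough number on that ratio point: each purine nucleotide carries one purine ring (the eventual xanthine or urate) and one, two, or three phosphate groups, so the nominal Pi-to-urate ratio delivered by a supplement depends directly on which nucleotide species predominates. This is only a back-of-the-envelope sketch of the stoichiometry, assuming complete dephosphorylation and complete degradation of the purine ring, not a model of what actually happens during digestion and first-pass metabolism:

# Rough stoichiometry: moles of Pi liberated per mole of urate/xanthine ultimately
# formed, assuming complete dephosphorylation and complete degradation of the
# purine ring (a simplifying assumption, not an in vivo result).
nucleotides = {
    "AMP": 1,  # one phosphate group per purine ring
    "ADP": 2,
    "ATP": 3,
}
for name, pi_per_purine in nucleotides.items():
    print(f"{name}: ~{pi_per_purine} mol Pi per mol of purine (urate/xanthine precursor)")

So a shift from monophosphates toward triphosphates roughly triples the nominal Pi load per mole of urate, which is the kind of change in the ratio that could, in principle, shift the composition of any precipitates that form.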
It's interesting that the absorption of Mg, in particular, can also be drastically decreased by its binding to and sequestration by bile acids and unabsorbed fatty acids in people who display malabsorption due to liver disease, etc., and I've cited research on that in past postings. The dosages of Mg that researchers had to use to overcome that binding effect and just correct the deficiency state, in children who were undergoing treatment for liver disease, work out to a dose of 2,380 mg/day of Mg for a 70-kg human [Heubi et al., 1997: (http://www.ncbi.nlm.nih.gov/pubmed/9285381); (http://hardcorephysiologyfun.blogspot.com/2009/01/articles-on-pantothenic-acid-vitamin-b5.html)]. That means that most of that 2,380 mg (or the equivalent dosages in children) was not even available for absorption. And that figure doesn't even account for the binding of Mg by phosphate, etc. There's actually a scaling factor of about 2 that's sometimes used to convert children's dosages to adults' dosages, but I think that scaling factor is only applicable to children within a fairly narrow range of ages. But even supposing it's 1,190 mg of Mg that's being bound by endogenous bile salts and dietary fatty acids in an adult who has liver disease, it's relevant that as many as 20-30 percent of Americans display some degree of nonalcoholic fatty liver disease. A gram of phosphate can bind up to 1,800 mg of Mg. That means a person who is not taking supplemental Mg and who is adding a gram of phosphate to his or her diet, through meat intake or some other route, could conceivably end up with a daily Mg "intake" of "negative 2,690 mg," assuming the person gets the usual, measly 300 mg/day from foods. In reality, the Pi wouldn't bind that much Mg in vivo, especially if the Mg were taken at a different time. But the point is that the magnitude of the Mg binding can be very large, and the nucleating effect of some of these endogenous, anionic compounds could create complex dose-response relationships for something like Pi. The formation of heterogeneous precipitates of bilirubin and calcium phosphate could also explain the apparent "phosphate-sparing" effect of calcium phosphate, even though calcium phosphate is more or less insoluble (see Heaney, 2004). (The calcium phosphate could remain insoluble and promote the nucleation of complexes of calcium and bilirubin, thereby reducing the amount of calcium that would be available to bind to dietary phosphate. That could increase the amount of phosphate that would be available for absorption. I don't quite understand the stoichiometries of the binding of soluble calcium with bilirubin and calcium phosphate or magnesium phosphate to form insoluble, heterogeneous precipitates, but it's likely that no one understands those issues.)
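For what it's worth, here's the arithmetic behind that "negative 2,690 mg" figure, as a minimal sketch; the inputs are just the numbers cited above, and the binding capacities are theoretical maxima rather than realistic in vivo values:

# Worst-case Mg "balance," using the figures cited in the text. These binding
# capacities are theoretical maxima; actual in vivo binding would be much lower.
dietary_mg = 300                   # mg/day, the "usual, measly" dietary Mg intake
mg_bound_by_bile_and_fat = 1190    # mg/day, the "even supposing it's half" of 2,380 mg
mg_bound_per_g_phosphate = 1800    # mg of Mg per gram of phosphate, theoretical maximum
extra_phosphate_g = 1.0            # the added gram of dietary phosphate

net_mg = dietary_mg - mg_bound_by_bile_and_fat - mg_bound_per_g_phosphate * extra_phosphate_g
print(f"Nominal daily Mg balance: {net_mg:.0f} mg")  # -2690 mg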
Friday, September 18, 2009
Potential Interactions of Urate and Inorganic Phosphate with Xenobiotic Substrates and Physiological Substrates of Organic Anion Transporters
One thing I was going to mention is that an excessive intake of inorganic phosphate, alone or in combination with oral purine nucleotides, could conceivably interact with prescription or nonprescription drugs that are substrates of organic anion transporters (OAT's) or multidrug resistance (MDR) protein transporters. The main types of drug-drug interactions that are given attention in the literature are the interactions that involve the noncompetitive inhibition, induction, or competitive inhibition of cytochrome P450 enzymes. But another type of interaction that would be more difficult to predict or even measure could be the competition of two substrates for export, across the canalicular, or apical, membranes of biliary epithelial cells, into the bile. Urate and phosphate can compete for export into the blood or bile by OAT's on the plasma membranes of different cell types in the liver, and bilirubin (http://scholar.google.com/scholar?q=bilirubin+%22organic+anion%22&hl=en), bile acids (http://scholar.google.com/scholar?hl=en&q=%22bile+acids%22+%22organic+anion%22), and other compounds are also substrates of various OAT's. A lot of different drugs are also substrates of OAT's (http://scholar.google.com/scholar?q=drugs+transport+%22organic+anion%22&hl=en) and might compete with urate or phosphate or xanthine, for example, conceivably, for export into the bile. It's unlikely that these interactions would be significant, in my opinion, except at high or excessive dosages of uricogenic purines or inorganic phosphate or in people who have liver or kidney disease. As I've mentioned in past postings, however, some neuraminidase inhibitors and other drugs or metabolites of drugs that are excreted unchanged or otherwise eliminated primarily by renal excretion might interact more significantly with high dosages of oral purines or with excessive amounts of inorganic phosphate. The effect that could conceivably be problematic would be a slowing, in response to an increase in intracellular urate or phosphate, etc., of the rate of biliary or renal excretion of a given drug. That's one reason it's always necessary to discuss these things with one's doctor.
Nonetheless, urate has been used to treat various forms of liver disease in animal models [one example: Garcia-Ruiz et al., 2006: (http://www.ncbi.nlm.nih.gov/pubmed/16941682)], and researchers have shown that urate can protect against mitochondrial dysfunction induced by a wide variety of treatments that produce mitochondrial dysfunction by increasing peroxynitrite formation (http://scholar.google.com/scholar?hl=en&q=mitochondrial+peroxynitrite+urate+OR+uric). A lot of factors and disease states can increase peroxynitrite formation, and the "antioxidant" (or, more accurately, "nitrosative-degradation-by-proxy") effects of urate, along with its apparent capacity to decrease or directly inhibit PARP-1 activity, make it more useful than many other compounds or antioxidants, in my opinion. As I've discussed in past postings, it may well be advantageous for an antioxidant, such as urate, to not be regenerated. Nonetheless, urate can, for example, regenerate melatonin and guanosine from their radical species by apparently-nonenzymatic mechanisms (http://scholar.google.com/scholar?hl=en&q=melatonin+regeneration+urate). And, as far as the rest of this posting is concerned, there's evidence that hypophosphatemia and intracellular phosphate depletion in the liver may contribute to liver damage in some cases and disease states (see past postings). One of the most important considerations in the context of phosphate homeostasis is to be aware that, in my opinion, the "phosphate" contained in inositol hexakisphosphate and other phytate compounds, in cereal grains and "plant proteins," etc., is unlikely to provide much, if any, utilizable phosphate in humans [see here: (http://hardcorephysiologyfun.blogspot.com/2009/08/phytates-as-potentially-poor-sources-of.html); (http://hardcorephysiologyfun.blogspot.com/2009/07/phytates-inositol-hexaphosphate-and.html)]. As far as my own calculation of my "dietary phosphate" intake went, I didn't even bother to include a contribution of cereal-grain phosphate. I put a big "NOTH-THING" by the spot on the page for the mg phosphate derived from phytate-containing foods. But I can't make that determination or calculation for anyone except myself. If I had been in the business of obtaining "hocus-pocus-microbial-phytase-derived-phantom-phosphate" phosphate from foods, maybe I'd have listed an actual number. But anyway, as with any compound, bizarrely-high dosages could cause problems. In response to massive dosages of either uricogenic purines or inorganic phosphate, those problems could take the form of interactions with other OAT substrates.
Wednesday, September 16, 2009
Crucially-Important and Mind-Bending Interactions in Phosphate and Magnesium Homeostasis
The authors of these articles [Thumfart et al., 2008: (http://www.ncbi.nlm.nih.gov/pubmed/18701629); Wei et al., 2006: (http://www.pdiconnect.com/cgi/reprint/26/3/366.pdf)(http://www.ncbi.nlm.nih.gov/pubmed/16722031)] have discussed the evidence that serum magnesium (Mg) levels tend to be inversely correlated with serum parathyroid hormone (PTH) levels, and Wei et al. (2006) discussed all of the evidence that magnesium repletion can be protective against calcification and thrombosis and markers of cardiovascular disease. The inverse relationship between the serum Mg and PTH levels is not widely known in the literature, and I've only recently even seen research on it. It's really important, and Mg is just really important, in general, in my opinion. Most of the research and articles on Mg and PTH have focused on the hypocalcemic hypoparathyroidism that can occur in severe Mg deficiency, but repletion of Mg in severely-deficient animals or humans only increases serum calcium (Ca) back to normal levels, by increasing (restoring) the normal capacity of the parathyroid glands to release PTH in response to decreases in serum Ca. At serum Mg levels or dietary Mg supplies that are higher than those that are required for those most basic functions, Mg is thought to suppress PTH levels by acting as a "weak" activator, like a partial agonist, almost, of the calcium-sensing receptor(s) that mediate the suppression of PTH release in response to increases in serum Ca (Thumfart et al., 2008). Paradoxically, Mg can also increase urinary calcium excretion by reducing the reabsorption of calcium in the renal tubules (Thumfart et al., 2008). It's important to note that that increase in urinary Ca excretion would be likely to occur in conjunction with the decreases in the risk of nephrocalcinosis that researchers have generally found in response to increases in Mg intake. In the case of phosphate (Pi) repletion, the decreases in urinary Ca excretion are thought to result from the suppression of PTH-mediated bone resorption. Thus, Pi is thought to actually reduce the amount of Ca that is filtered in the glomeruli, and Mg may partially act by inhibiting Ca reabsorption in the renal tubules (proximal tubules and distal tubules). But the research I cited above suggests that Mg can increase the rate of urinary Ca excretion and even decrease the serum Ca levels and exert a concomitant, suppressive effect on PTH release. That's a really unusual set of effects. Pi can decrease serum Ca (an effect that is probably undesirable) and decrease urinary calcium excretion but can also elevate PTH levels, and that's an effect that could be attenuated, for better or worse, by an increase in Mg availability to the parathyroid glands or the Ca sensing proteins in the renal tubules, etc. Additionally, many of the bizarre derangements in the homeostatic regulation of Ca and Pi that have been found in response to long-term, excessive Pi supplementation could result, in some cases, from Mg depletion. I also get the general sense that a "high" Pi intake will tend to produce plasma volume expansion and lead to a reduction in urinary sodium excretion, and that could tend to oppose the natriuretic effect that high doses of Mg can sometimes produce. 
Another way of looking at it would be to say that a high or excessive Pi intake may produce plasma volume expansion by reducing Mg absorption or by increasing Mg turnover by other mechanisms, and Mg has sometimes produced low-level diuretic effects [Walker et al., 1998: (http://www.ncbi.nlm.nih.gov/pubmed/9861593?dopt=Abstract)], by mechanisms that aren't clear.
Incidentally, this is another article that includes a discussion of the antithrombotic effects that increases in Mg availability can produce [Maier et al., 2004: (http://www.ncbi.nlm.nih.gov/pubmed/15158909)], and I've been meaning to collect some of the articles that show the antithrombotic effects of Mg repletion or of elevations in the extracellular Mg levels (the steady-state, extracellular Mg levels are not necessarily or even usually going to be elevated much or at all, even in response to Mg supplementation that increases intracellular Mg levels).
What's really interesting is that low serum Mg and low serum Pi tend to go hand in hand and produce many of the same manifestations, including rhabdomyolysis and decreases in red blood cell (RBC) deformability and hemolytic anemia and decreases in RBC 2,3-DPG and ATP, etc. Oken et al. (1971) [Oken et al., 1971: (http://www.bloodjournal.org/cgi/reprint/38/4/468.pdf)(http://www.ncbi.nlm.nih.gov/pubmed/5571433)] found that Mg deficiency caused hemolytic anemia, reticulocytosis in combination with erythroid hyperplasia in the bone marrow (basically meaning that some erythroid colony-forming units in the bone marrow may be enlarged and hyperresponsive to erythropoietin and that the immature RBC's are more numerous and are also undergoing apoptosis at a high rate, because of Mg depletion), decreases in serum phosphorus (and, hence, serum Pi, also), and decreases in the RBC 2,3-DPG and ATP concentrations and in the overall glycolytic activity in RBC's. Piomelli et al. (1973) [Piomelli et al., 1973: (http://www.bloodjournal.org/cgi/reprint/41/3/451.pdf)(http://www.ncbi.nlm.nih.gov/pubmed/4690142)] also found hemolytic anemia in Mg depleted rats and cited research that had shown hypophosphatemia and hypomagnesemia to occur concomitantly in animals and humans.
It's also really important to note that a lot of articles have shown that Mg supplementation at dosages that would produce "desirable" effects, in my opinion, can decrease serum Pi or produce outright hypophosphatemia. And Pi supplementation can produce hypomagnesemia and intracellular Mg depletion. I think that the amounts of supplemental Mg that might be required to compensate for those effects of Pi repletion could be large and could be too high for many people to easily "accept." But, if the Mg is binding to Pi in the GI tract and precipitating, it's not going to be absorbed (there could be some solubilization in response to pH changes along the GI tract, but, for the most part, the precipitation is going to be permanent and is going to mean that the Mg and Pi are "lost"). One way of considering this would be to say that some percentage of a dose of Pi, from the diet or a low-dose supplement, under a doctor's supervision, is going to be absorbed and some percentage is not going to be absorbed and will probably bind some amounts of Mg and Ca. The intestinal absorption of Ca, the maintenance of serum Ca, and renal Ca reabsorption are much more effectively maintained and regulated, in my opinion. Researchers have noted that the serum Ca is more stable, even in terms of circadian changes, than the serum Pi is. The serum Pi fluctuates wildly throughout the day and in response to exercise, etc. I think the serum Mg levels are not as unstable as the serum Pi levels are, but the intracellular Mg concentrations are very easily depleted, such as in response to catecholaminergic stimulation, etc. Assuming one is using any supplemental Mg and Pi under a doctor's supervision, there's really a need to not be afraid to increase the supplemental intake of Mg, from Mg salts (such as magnesium hydroxide or magnesium oxide), slowly but relatively freely, to compensate for the reductions in absorption that are likely to result from the extra Pi. It wouldn't be a good idea to increase the Pi as freely as one might increase one's Mg intake, but part of the point of this is that an adequate degree of Mg availability is really obligatory for many of the effects of Pi repletion to be sustained in the longer term. One way to approach this type of problem would be to decide on some dosage of supplemental Mg that is tolerable and safe, under a doctor's supervision, and then to increase the ratio of Pi to Ca, assuming one would want to do this in the first place. That would allow one to evaluate the effect of the Pi increase, from food or low-dose supplements, with the "knowledge" of the baseline effects that the initial Mg dosage produced. There's still a tendency for a lot of the research to focus on the most severe manifestations of the depletion of Mg or Pi or both (hypophosphatemia or hypomagnesemia), but intracellular Pi and Mg depletion tend to occur long before overt hypomagnesemia and hypophosphatemia occur. In any case, those articles I cited are just the tip of the iceberg. Resistance exercise that is done correctly, for example, can drastically deplete intracellular Mg and Pi concentrations, but the tendency has been to focus, in the case of Pi and RBC 2,3-DPG, on the short-term, post-exercise increases in RBC 2,3-DPG or serum Pi. But the more important issues have to do with the changes that occur in the days after the workout.
It doesn't make sense to say that resistance exercise that correctly emphasizes the eccentric movement is going to increase RBC 2,3-DPG in the hours after exercise and then cause those levels and the intracellular Pi levels in skeletal or cardiac myocytes to also remain persistently elevated. Where would the Pi come from? It's likely that intense exercise can produce drastic depletions in intracellular Pi levels, but I haven't seen a lot of data on that. The increases in catecholaminergic transmission during resistance exercise would be expected to produce a significant depletion of intracellular Mg, and this has been shown to occur. And beta-adrenoreceptor agonists can produce hypophosphatemia and hypomagnesemia in the long term, etc. Even lowly L-methylfolate could reasonably be expected to increase Pi and Mg turnover, in my opinion, as a result of its apparent catecholaminergic effects.
Tuesday, September 15, 2009
Phosphate as a "Magnesium-Binder"
I was looking at the amounts of calcium that can bind to a given amount of dietary phosphate and form an insoluble precipitate [Heaney, 2004: (http://www.mayoclinicproceedings.com/content/79/1/91.full.pdf)(http://www.ncbi.nlm.nih.gov/pubmed/14708952)]. That quantitative relationship is likely to be important in determining the amounts of supplemental magnesium that would, in theory, in my opinion, be required to compensate for the formation of insoluble complexes of magnesium and phosphate (Pi) in the GI tract. Magnesium is probably just as effective as calcium (a recent article shows it to be more "effective" than calcium) as a "phosphate binder" [here are some of the older articles: (http://scholar.google.com/scholar?q=%22Long-term+use+of+magnesium+hydroxide+as+a+phosphate+binder+in+patients+on+hemodialysis%22&hl=en)], but, assuming magnesium and calcium are equipotent as phosphate binders, the quantitative relationship that Heaney (2004) mentioned would mean that 1,000 mg of supplemental phosphate (I'm assuming phosphorus means phosphate, given the convention) could theoretically bind up to 1,827 mg of magnesium (or 3,012 mg of calcium). In reality, it wouldn't be that high, and the articles on phosphate binding (it's used to treat hyperphosphatemia in people who have renal failure) discuss those discrepancies between in vitro data and in vivo data, etc. And the effect could be minimized through the use of chelated magnesium aspartate, assuming one can tolerate it, or by separating the phosphate intake from the magnesium intake or by using organic phosphate compounds, such as ATP or fructose-1,6-diphosphate (if those phosphorylated compounds were actually available). The organic phosphate compounds can be absorbed intact, to some extent, and any complexes formed with magnesium would still be soluble (in all likelihood, in my opinion), meaning that the complex could also be absorbed intact by solvent-drag-facilitated passive diffusion, etc.
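As a sanity check on those figures, the 1,827 mg of magnesium and 3,012 mg of calcium are just the same number of moles expressed with the two different atomic masses, which is exactly the "equipotent phosphate binders" assumption stated above:

# Convert the calcium-binding figure to an "equipotent" magnesium figure on a
# molar basis (the assumption made in the text, not an empirical measurement).
ATOMIC_MASS_CA = 40.08  # g/mol
ATOMIC_MASS_MG = 24.31  # g/mol

ca_bound_mg = 3012.0                          # mg of Ca per 1,000 mg of phosphate
mmol_bound = ca_bound_mg / ATOMIC_MASS_CA     # ~75 mmol
equivalent_mg = mmol_bound * ATOMIC_MASS_MG   # ~1,827 mg of Mg
print(f"{mmol_bound:.1f} mmol bound -> {equivalent_mg:.0f} mg of Mg equivalent")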
But the point is that, in the case of magnesium oxide, there could conceivably be a really significant reduction in the absorption of magnesium. I think it would probably be more pronounced with something like disodium phosphate than with ATP or dietary phosphate. There's a lot of research in this area, and it's really interesting. In any case, it's almost impossible for a person who does not have kidney failure to become hypermagnesemic, and a couple of those articles discussed the use of dosages of up to 3,000 mg of magnesium hydroxide, which is about 42 percent elemental magnesium by weight (roughly 966-1,260 mg of elemental Mg across the dosages used), as a phosphate binder in people with renal failure. Those dosages didn't cause hypermagnesemia, even in people with renal failure. But the main concern is not necessarily the potential for hypermagnesemia, in my opinion, but the disturbances in electrolytes or in nerve fiber conduction or in the short-term regulation of blood pressure, etc., in susceptible individuals, and so one would want to discuss these things with one's doctor. The main thing is to be aware of the potentially large magnitude of the "interaction" with magnesium oxide or other magnesium salts. A lot of the magnesium might not be absorbed. That's what I mean when I refer to an "interaction." Chelated magnesium aspartate (this is a chelated form that has been researched a lot and that doesn't provide massive amounts of glycine) is thought to be largely absorbed intact, through dipeptide transporters or passive diffusion, and its solubility in chelated form would probably prevent it from precipitating with phosphate. It's conceivable that there could still be some binding to phosphate or pyrophosphate, but, anyway, "it's a wrap" for tonight: a quantitative estimate of the maximal magnesium-binding capacity of phosphate in the intestinal luminal fluid.
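The "about 42 percent elemental magnesium" figure for magnesium hydroxide follows directly from the standard atomic masses, as a quick check:

# Elemental Mg fraction of magnesium hydroxide, Mg(OH)2, from standard atomic masses.
MASS_MG, MASS_O, MASS_H = 24.31, 16.00, 1.008
mass_mg_oh2 = MASS_MG + 2 * (MASS_O + MASS_H)  # ~58.3 g/mol

fraction = MASS_MG / mass_mg_oh2
print(f"Elemental Mg fraction of Mg(OH)2: {fraction:.1%}")            # ~41.7%
print(f"Elemental Mg in a 3,000 mg dose: {3000 * fraction:.0f} mg")   # ~1,250 mg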
Sunday, September 13, 2009
Stimulation of Respiration and Activation of TCA Cycle Enzymes by Inorganic Phosphate in Isolated Mitochondria
In this article [Bose et al., 2003: (http://www.jbc.org/cgi/reprint/278/40/39155)(http://www.ncbi.nlm.nih.gov/pubmed/12871940)], Bose et al. (2003) found that inorganic phosphate (Pi) increased the respiratory rate in isolated mitochondria from pigs' skeletal muscle cells and cardiac muscle cells, and the authors used the increase in the rate of NADH generation as an important indication that the respiratory rate had increased. The increase in NADH generation occurred in the presence of uncouplers (compounds that uncouple the redox reactions of the multienzyme complexes in the electron transport chain from the generation of ATP by the F1F0-ATPase protein) [see here for discussion: (http://hardcorephysiologyfun.blogspot.com/2009/05/cytosolic-redox-potential-and-proton.html)], and Bose et al. (2003) noted that the Pi-induced increases in the rate of NADH generation were likely to have been a result of the "global" activation of various or numerous NAD(+)-dependent dehydrogenase enzymes or enzyme complexes by Pi. The authors cited research, on p. 39161, showing that Pi can activate the NADH-generating TCA cycle enzymes 2-oxoglutarate dehydrogenase and NAD-dependent isocitrate dehydrogenase (this is not the same enzyme as the NADP-dependent isocitrate dehydrogenase). The NADH is then more or less immediately oxidized to NAD+ by respiratory chain enzymes, assuming there's enough oxygen and the mitochondria have not been damaged, etc., and numerous TCA cycle dehydrogenase enzymes either bind complex I, a multienzyme respiratory chain complex that oxidizes NADH formed by TCA cycle enzymes, and channel NADH to complex I or are functionally coupled to complex I activity less directly [Sumegi and Srere, 1984: (http://www.jbc.org/cgi/reprint/259/24/15040.pdf)(http://www.ncbi.nlm.nih.gov/pubmed/6439716)]. Bose et al. (2003) also argued, on p. 39162, that the way in which Pi appears to regulate respiration, by multiple mechanisms, might mean that Pi could exert an antioxidant function
["the generation of free radicals in the mitochondria may be minimized" (Bose et al., 2003, p. 39162], but the authors also cited, on p. 39163 (reference 35), research implying that Pi could exacerbate the augmentation of the rate of free-radical formation following ischemia. I'd wonder what the concentrations used by the authors might have been, in some of those articles cited, because I've seen cell-culture studies showing effects of Pi that don't make sense to me and use supraphysiological concentrations of Pi, show proapoptotic or toxic effects of massive concentrations of Pi, or contrast, in ways that may lack physiological relevance, the effects of excesses of Pi with the supposed protective effects of various drugs, etc. That said, I do think Pi could affect mitochondrial functioning in ways that are not desirable, but it's noteworthy, as Bose et al. (2003) intimated, that ischemia and other forms of metabolic stress can cause Pi to be released during the degradation of phosphocreatine and could derange mitochondrial Pi homeostasis in ways that would be more significant than the ways in which increases in Pi availability would be likely to derange Pi homeostasis. One is unlikely to be able to "hide" from ischemia-induced, wild extremes in mitochondrial Pi influx by restricting dietary Pi, for example, because Pi depletion has the potential to exacerbate those "wild swings" in Pi availability by causing hypoxia, ATP depletion, hemolysis, rhabdomyolysis, etc., in my opinion. But it's worth noting that excesses of intracellular, free Pi could produce adverse effects on mitochondrial functioning.
Bose et al. (2003) also cited research showing that the transport of Pi across the inner mitochondrial membrane is likely to influence the pH gradient across the inner mitochondrial membrane, given that Pi transport appears to be coupled to OH(-) or H(+) transport, and that Pi is used as a substrate in the phosphorylation of ADP by the F1F0-ATPase protein. I don't know if there's an enzyme-bound intermediate that's formed from Pi and that contains a hydrolyzable phosphodiester bond, etc. It looks like it wasn't known, as of 2000 [Vinogradov, 2000: (http://jeb.biologists.org/cgi/reprint/203/1/41.pdf)(http://www.ncbi.nlm.nih.gov/pubmed/10600672)]. That's remarkable. That's a terrific article, though, by Vinogradov (2000), and the most important piece of information in there is probably the statement on p. 44 that F1F0-ATPase is "activated" by a free magnesium ion, meaning free Mg(2+), and doesn't just depend on magnesium bound to adenosine nucleotides, as MgATP(2-) and MgADP(-). That could be really important, and it's basically like saying it's a catalytic magnesium ion. That could conceivably help to explain some of the supposed neuroprotective effects of magnesium and could also explain some of its apparent effects on exercise performance, etc. Magnesium also generally enhances overall glycolytic activity and creatine kinase activity, and those effects, along with its purine nucleotide-buffering effects (preventing the loss of adenosine nucleotides, etc.), could also be relevant in those contexts. Never mind that there's an incorrect assumption, in most articles and textbooks, that the intracellular magnesium concentration is high enough to bind to all of the available free ATP and ADP, etc. That's not likely to be true in 99.9 percent of people, and, in my opinion, the activities of many enzymes that contain binding sites for catalytic magnesium ions are likely to be sensitive to changes in magnesium status. Vinogradov (2000) also noted that magnesium is likely to also bind Pi, probably more loosely than magnesium binds some of its other substrates and regulatory factors. I've seen that mentioned in other articles, including the article by Bose et al. (2003), and Bose et al. (2003) cited research, on the first page of their article, describing the capacity of Pi to bind calcium and magnesium (I'm assuming they're talking about reversible binding, in the context of the regulation of intramitochondrial free calcium by its complexation with orthophosphate, etc.) and influence their effects on respiration. I'm not sure what the mechanism is by which Pi activates the TCA cycle enzymes, but I'll have to read on that. Maybe it's partially a result of allosteric effects, and maybe some of those allosteric effects are a result of Pi-induced changes in Ca(2+) binding to the enzymes or enzyme complexes, etc. I'll have to look at some of those articles.
["the generation of free radicals in the mitochondria may be minimized" (Bose et al., 2003, p. 39162], but the authors also cited, on p. 39163 (reference 35), research implying that Pi could exacerbate the augmentation of the rate of free-radical formation following ischemia. I'd wonder what the concentrations used by the authors might have been, in some of those articles cited, because I've seen cell-culture studies showing effects of Pi that don't make sense to me and use supraphysiological concentrations of Pi, show proapoptotic or toxic effects of massive concentrations of Pi, or contrast, in ways that may lack physiological relevance, the effects of excesses of Pi with the supposed protective effects of various drugs, etc. That said, I do think Pi could affect mitochondrial functioning in ways that are not desirable, but it's noteworthy, as Bose et al. (2003) intimated, that ischemia and other forms of metabolic stress can cause Pi to be released during the degradation of phosphocreatine and could derange mitochondrial Pi homeostasis in ways that would be more significant than the ways in which increases in Pi availability would be likely to derange Pi homeostasis. One is unlikely to be able to "hide" from ischemia-induced, wild extremes in mitochondrial Pi influx by restricting dietary Pi, for example, because Pi depletion has the potential to exacerbate those "wild swings" in Pi availability by causing hypoxia, ATP depletion, hemolysis, rhabdomyolysis, etc., in my opinion. But it's worth noting that excesses of intracellular, free Pi could produce adverse effects on mitochondrial functioning.
Bose et al. (2003) also cited research showing that the transport of Pi across the inner mitochondrial membrane is likely to influence the pH gradient across the inner mitochondrial membrane, given that Pi transport appears to be coupled to OH(-) or H(+) transport, and that Pi is used a substrate in the phosphorylation of ADP by the F1F0-ATPase protein. I don't know if there's an enzyme-bound intermediate that's formed from Pi and that contains a hydrolyzable phosphodiester bond, etc. It looks like it wasn't known, as of 2000 [Vinogradov, 2000: (http://jeb.biologists.org/cgi/reprint/203/1/41.pdf)(http://www.ncbi.nlm.nih.gov/pubmed/10600672)]. That's remarkable. That's a terrific article, though, by Vinogradov (2000), and the most important piece of information in there is probably the statement on p. 44 that F1F0-ATPase is "activated" by a free magnesium ion, meaning free Mg(2+), and doesn't just depend on magnesium bound to adenosine nucleotides, as MgATP(2-) and MgADP(-). That could be really important, and it's basically like saying it's a catalytic magnesium ion. That could conceivably help to explain some of the supposed neuroprotective effects of magnesium and could also explain some of its apparent effects on exercise performance, etc. Magnesium also generally enhances overall glycolytic activity and creatine kinase activity, and those effects, along with its purine nucleotide-buffering effects (preventing the loss of adenosine nucleotides, etc.), could also be relevant in those contexts. Never mind that there's an incorrect assumption, in most articles and textbooks, that the intracellular magnesium concentration is high enough to bind to all of the available free ATP and ADP, etc. That's not likely to be true in 99.9 percent of people, and, in my opinion, the activities of many enzymes that contain binding sites for catalytic magnesium ions are likely to be sensitive to changes in magnesium status. Vinogradov (2000) also noted that magnesium is likely to also bind Pi, probably more loosely than magnesium binds some of its other substrates and regulatory factors. I've seen that mentioned in other articles, including the article by Bose et al. (2003), and Bose et al. (2003) cited research, on the first page of their article, describing the capacity of Pi to bind calcium and magnesium (I'm assuming they're talking about reversible binding, in the context of the regulation of intramitochondrial free calcium by its complexation with orthophosphate, etc.) and influence their effects on respiration. I'm not sure what the mechanism is by which Pi activates the TCA cycle enzymes, but I'll have to read on that. Maybe it's partially a result of allosteric effects, and maybe some of those allosteric effects are a result of Pi-induced changes in Ca(2+) binding to the enzymes or enzyme complexes, etc. I'll have to look at some of those articles.
Saturday, September 12, 2009
Phosphate (Pi) Sequestration by Fructose; Potential Effects of Changes in Pi Availability on the Mitochondrial Proton Gradient and on XDH Activity
This is one of the other articles that includes a discussion of the mechanisms by which fructose acutely increases plasma uridine and also urinary uridine excretion [Yamamoto et al., 1997: (http://www.ncbi.nlm.nih.gov/pubmed/9160822)], but Yamamoto et al. (1997) didn't show the decreases in plasma uridine, to levels below the baseline concentrations, that occur after the increases (see a recent posting). Yamamoto et al. (1997) also didn't address the mechanism by which the fructose-induced inorganic phosphate (Pi) sequestration leads to purine degradation, but a key mechanism is that the decrease in intracellular Pi disinhibits adenosine monophosphate (AMP) deaminase. AMP deaminase is normally inhibited by Pi. Yamamoto et al. (1997) cited a lot of interesting research, however. They suggested that the ethanol-induced (and, by less direct mechanisms, fructose-induced) increases in hypoxanthine and xanthine might have resulted from the elevations in the cytosolic NADH/NAD+ ratio that results from the metabolism of ethanol to acetaldehyde, given that NADH inhibits xanthine dehydrogenase activity. Fructose could also produce that effect, albeit to a lesser extent than ethanol. In addition to the ATP depletion that ultimately can occur through the disinhibition of AMP deaminase, resulting from fructose-induced Pi sequestration, Yamamoto et al. (1997) referred to the direct consumption of ATP in the fructokinase reaction that forms fructose-1-phosphate and thereby sequesters Pi [see also Phillips and Davies, 1985: (http://jp.physoc.org/content/520/3/909.full)(http://www.ncbi.nlm.nih.gov/pubmed/2992452)]. It's worth noting that fructose also depletes guanosine triphosphate (and guanosine nucleotides in general, as shown in multiple articles), partly because fructokinase activity is apparently GTP-dependent (Phillips and Davies, 1985). Fantastic. It depletes all the major nucleotide pools. Cytidine depletion would also be expected to occur (I'll bet there's some research showing that, too), given that cytidine is formed from uridine. But the point I was going to make is that changes in intracellular Pi could regulate xanthine dehydrogenase activity by buffering the intracellular pH, given that increases in the intracellular pH tend to activate phosphofructokinase and glycolytic activity overall. That increase in glycolysis would then increase the NADH/NAD+ ratio and reduce xanthine dehydrogenase activity, and that could conceivably allow for more salvage of hypoxanthine (and even xanthine, which can be salvaged to a minimal extent by a two-enzyme pathway). Yamamoto et al. (1997) cited research showing that lactate can decrease the rate of urinary uric acid excretion but apparently doesn't reduce the excretion of hypoxanthine or xanthine [the oxypurines that Yamamoto et al. (1997) are referring to]. Does Pi repletion increase or decrease ischemia-induced glycolytic activity? Pi repletion generally does increase the activities of glycolytic enzymes, in many of the articles I've seen, but it could also reduce the kinds of wild fluctuations in the intracellular pH that can occur during ischemia. 
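Going back to the disinhibition point above, here is a deliberately crude, hypothetical sketch of how a drop in intracellular Pi could translate into an increase in AMP deaminase flux, using a Michaelis-Menten term multiplied by a simple noncompetitive-style inhibition term for Pi. The functional form and every parameter value are illustrative placeholders, not measured kinetics for the actual enzyme:

# Toy model of AMP deaminase flux vs. intracellular Pi, treating Pi as a simple
# noncompetitive-style inhibitor (illustrative only; the real enzyme's kinetics
# are more complicated, and the Km/Ki values below are placeholders).
def amp_deaminase_flux(amp_mM, pi_mM, vmax=1.0, km_amp=0.5, ki_pi=1.0):
    saturation = amp_mM / (km_amp + amp_mM)   # Michaelis-Menten term for AMP
    inhibition = ki_pi / (ki_pi + pi_mM)      # inhibition relaxes as Pi falls
    return vmax * saturation * inhibition

for pi in (4.0, 2.0, 1.0, 0.5):  # falling intracellular Pi, e.g. after a fructose load
    print(f"Pi = {pi:.1f} mM -> relative flux = {amp_deaminase_flux(0.5, pi):.2f}")

The only point of the sketch is the direction of the effect: as the Pi term falls, the inhibition relaxes and the computed flux rises, which is the qualitative behavior that the fructose-induced Pi-sequestration argument relies on.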
The Pi-induced increases in glycolytic activity, by allosteric mechanisms, could increase the cytosolic NADH/NAD+ ratio [Zhou et al., 2005: (http://jp.physoc.org/content/569/3/925.full.pdf+html)(http://www.ncbi.nlm.nih.gov/pubmed/16223766?dopt=Abstract)] and inhibit xanthine dehydrogenase activity (meaning that, from a simplistic standpoint, that effect could conceivably decrease uric acid formation and enhance purine salvage). And, in the absence of a high intake of a phosphate salt displaying an abnormal ratio of monobasic to dibasic orthophosphate [orthophosphate refers to HPO4(2-) plus H2PO4(-) plus the less-than-1-percent contribution of PO4(3-)], Pi repletion can produce an alkalinizing effect that could also activate glycolysis and further reduce xanthine dehydrogenase activity. But it could also exert more of a neutral effect. Those are just speculative thoughts.
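For reference, the monobasic-to-dibasic ratio mentioned above is set by the second pKa of phosphoric acid (about 7.2), so the approximate speciation at a given pH can be sketched from the standard textbook pKa values; ionic-strength and ion-pairing effects are ignored in this little snippet:

# Approximate orthophosphate speciation from the standard pKa values of
# phosphoric acid; ionic-strength corrections are ignored.
PKA1, PKA2, PKA3 = 2.15, 7.21, 12.35

def phosphate_fractions(pH):
    h = 10.0 ** (-pH)
    k1, k2, k3 = (10.0 ** (-p) for p in (PKA1, PKA2, PKA3))
    # Unnormalized populations of H3PO4, H2PO4(-), HPO4(2-), and PO4(3-)
    terms = [h ** 3, h ** 2 * k1, h * k1 * k2, k1 * k2 * k3]
    total = sum(terms)
    return [t / total for t in terms]

h3po4, h2po4, hpo4, po4 = phosphate_fractions(7.4)
print(f"pH 7.4: H2PO4(-) {h2po4:.1%}, HPO4(2-) {hpo4:.1%}, PO4(3-) {po4:.1e}")

At pH 7.4 that works out to roughly a 40:60 monobasic-to-dibasic ratio, with the PO4(3-) contribution many orders of magnitude below 1 percent, consistent with the parenthetical above.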
For that matter, I wonder if the alkalotic effects of excesses of Pi might abolish or decrease the mitochondrial proton gradient under some circumstances, by mimicking the effects of uncouplers. Pi could conceivably stimulate respiration by that mechanism [that commonly occurs as a compensatory response (http://scholar.google.com/scholar?hl=en&q=stimulate+uncoupler+mitochondrial+respiration)], and that could explain those articles I cited, in a past posting, showing that Pi can increase the postprandial metabolic rate in humans, etc. That could conceivably account for some of its supposed psychiatric or psychoactive effects, and the "pseudodepression" and other effects of Pi depletion could be due to the poor "regulation" of the mitochondrial membrane potential. There are all sorts of articles showing that the stimulation of respiration is associated with phosphate influx into mitochondria, and phosphate influx interacts with ADP-stimulated respiration, etc. The point is that the effects of different concentrations of intracellular or intramitochondrial Pi on respiration could conceivably be either "bad" or "good," depending on the way you look at the effects.
It would be interesting to see some in vivo research on the effects of Pi depletion or repletion on the exercise-induced loss of purine nucleotides, for example, because it could be a complex set of effects. It's interesting that Hellsten et al. (1999) [Hellsten et al., 1999: (http://jp.physoc.org/content/520/3/909.full)(http://www.ncbi.nlm.nih.gov/pubmed/10545153?dopt=Abstract)] argued that the initial effect of exercise had been to increase Pi availability, thereby inhibiting AMP deaminase activity, but that the decreases in intracellular pH that had subsequently occurred had activated AMP deaminase activity. It's interesting that an increase in the inhibition of AMP deaminase by Pi would tend to lead to a relative increase in adenosine availability, and some of that adenosine would presumably serve to increase blood flow to the exercising muscles. I wonder if that increase could lead to a greater loss of adenosine, however, or if the Pi-mediated inhibition of AMP deaminase activity (as in the endothelial cells in which much of the adenosine deaminase-mediated deamination of interstitial-fluid adenosine occurs) would mean that more adenosine could be released and then also salvaged. The intracellular and extracellular adenosine concentrations are not usually very different, and there's a slight, inwardly-directed, transmembrane adenosine gradient. Usually, one thinks of adenosine release as being a unidirectional process that's "coupled" to an increase in the degradation, by adenosine deaminase in endothelial cells, of the adenosine to inosine and hypoxanthine. But, presumably, that's not always going to be the case. It's interesting that uncouplers are used to increase extracellular adenosine concentrations [see the reference to "respiratory uncouplers" on the first page of Rubio et al., 1972: (http://www.ncbi.nlm.nih.gov/pubmed/5022662)], and my overall point is that excessive concentrations of intracellular Pi, to the extent that they are achievable, could conceivably have some adverse effects that would go beyond the well-known increases in the risk of calcification, etc.
Thursday, September 10, 2009
Depletion of Intracellular Uridine in Response to Intracellular Phosphate Depletion: Potential Relevance to mtDNA & Nuclear DNA Turnover and Repair
In this article [Makras et al., 2008: (http://www.ncbi.nlm.nih.gov/pubmed/18252791)], Makras et al. (2008) described a person who had X-linked hypophosphatemic rickets (XLHR), a genetic disorder that impairs the reabsorption of phosphate from the tubular fluid in the proximal tubules, and in whom roughly seven years of phosphate supplementation was ultimately required to completely ameliorate his myopathy (muscle weakness, etc.). The authors noted the mysterious quality of the myopathy and their finding that the severity of the myopathy had generally been independent of the person's serum phosphate levels. The authors also noted that the myopathy had worsened during periods of vitamin D intoxication. I'm not sure whether they're talking about calcitriol or vitamin D, but it probably doesn't matter much. Hypercalcemia could conceivably result from supplementation with either vitamin D (at the high doses used in patients with XLHR) or calcitriol and could cause excessive calcium influx into myocytes, thereby impairing mitochondrial ATP formation, or cause hypercoagulability, etc.
Although the authors wrote that vitamin D usually causes rapidly-emerging improvements in muscle weakness in people who do not have inherited mutations that affect phosphate homeostasis, as in XLHR, it's conceivable to me that the myopathy could have resulted from mitochondrial dysfunction, perhaps as a result of acquired mitochondrial DNA (mtDNA) mutations, as a consequence of the phosphate depletion. The fact that the degree of muscle weakness was independent of the serum phosphate is not surprising, given that intracellular phosphate levels are known to be frequently, if not generally, independent of steady-state serum phosphate levels in normal humans given low dosages of supplemental phosphate. Given that intracellular phosphate depletion is known to deplete ATP and purine nucleotide pools and that the depletion of the pools of purine deoxyribonucleotides can impair mtDNA replication (see past postings), it's conceivable that intracellular phosphate depletion could impair DNA repair and lead to a gradual accumulation of mtDNA or even nuclear DNA mutations. It's worthwhile to note that the maintenance of an adequate pool of each of the major intracellular purine nucleotides is a prerequisite for the maintenance of pyrimidine salvage. Some of that has been shown in the context of fructose-induced hepatic ATP depletion: researchers have shown that fructose can deplete uridine from the liver and transiently elevate plasma uridine, as one might expect in response to fructose loading. Here are some references on that [see page 33 of the book chapter by Davies et al., 1998, who found that plasma uridine levels increased soon after fructose administration in humans and then decreased substantially by 4 hours after a meal: (http://scholar.google.com/scholar?q=fructose+uridine+plasma+OR+serum&hl=en)].
Thus, intracellular phosphate depletion could conceivably contribute to the development of mutations in nuclear DNA and to the development of some of those more severe myopathies or intractable disease states, such as chronic fatigue syndrome, by leading to a depletion of both purines and pyrimidines. That's just my opinion, however. It's noteworthy that DNA repair consumes a lot of ATP, and some authors have suggested, as I noted in my old folic acid paper (see past posting), that the depletion of intracellular total folates might cause apoptotic cell death in neurons by "DNA-repair-associated" ATP depletion. They meant that folate depletion and increases in the dUMP/dTMP ratio would set up a futile cycle of DNA damage and DNA repair, and that the repair would ultimately consume so much ATP as to lead to apoptotic cell death, such as in response to ischemic episodes or strokes that can cause a lot of DNA damage. Davies et al. (1998) argued that fructose-induced phosphate depletion in the liver had caused both the purine depletion, as evidenced by the elevations in serum uric acid, and the uridine export from the liver. I'm not suggesting that more is always going to be better, and those articles on the overlapping mechanisms governing the efflux of uric acid and inorganic phosphate, as discussed in recent postings, suggest that the metabolic cost or competitive inhibitory effects of excesses of intracellular inorganic phosphate could become significant, past a certain point, and derange the transport of organic anions other than uric acid, etc. Although the research suggests that a lot of phosphate would be required to create that type of state, it's worthwhile to discuss these things with one's doctor.
My view is that the data on the dosages of phosphate used in people with XLHR (and in other genetic disorders that reduce phosphate reabsorption) are relevant to normal humans, with regard to the risk of nephrocalcinosis, but I can think of a number of possible objections to that view. The first would be that, in people with XLHR, the rate of phosphate (Pi) reabsorption would be lower than it would be in normal people and that that would decrease the risk, in comparison to normal people, of intracellular calcium phosphate precipitation. I should mention that, in that long review on nephrocalcinosis that I recently discussed, the author noted that calcification can occur either extracellularly (and "luminally" or intraluminally), on the luminal membranes of the proximal or distal tubule cells or in the interstices of the tight junctions, or intracellularly, in the cells of the renal tubules. Thus, one could argue that normal people would have the same risk of intraluminal calcification as people who have XLHR would but a higher risk of intracellular calcification, in response to a given dosage of supplemental phosphate, than people who have XLHR would. In normal people, however, the proximal tubules are able to vary the percent reabsorption between something like 80 and 99 percent, and that means there would be a lot of potential for the proximal tubules to increase the urinary excretion of phosphate in response to some dosage of supplemental Pi. I just think the risks for normal people are basically similar to the risks for people who have XLHR.
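To put rough numbers on that 80-99 percent range, here's a back-of-the-envelope calculation using standard, approximate values for the glomerular filtration rate and serum Pi; the figures are illustrative round numbers, not data from any particular study.

```python
# Back-of-the-envelope illustration of how much the proximal tubules' variable
# fractional reabsorption of Pi (~80-99%) can change urinary phosphate excretion.
# Round, approximate values only.

GFR_L_PER_DAY = 180          # approximate normal glomerular filtration rate
SERUM_PI_MMOL_PER_L = 1.1    # approximate normal serum inorganic phosphate

filtered_load = GFR_L_PER_DAY * SERUM_PI_MMOL_PER_L   # ~200 mmol Pi filtered per day

for fractional_reabsorption in (0.99, 0.95, 0.90, 0.80):
    excreted_mmol = filtered_load * (1.0 - fractional_reabsorption)
    grams_phosphorus = excreted_mmol * 0.031           # ~31 mg elemental P per mmol
    print(f"reabsorption {fractional_reabsorption:.0%}: ~{excreted_mmol:.0f} mmol/day "
          f"(~{grams_phosphorus:.2f} g elemental phosphorus) excreted")
```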
The reason I'm focusing on the reabsorption is that XLHR doesn't affect the glomerular filtration of serum phosphate, except to the extent that the cells of people with XLHR might be more "hungry" for phosphate and might clear the serum phosphate, from a dosage of phosphate, more rapidly than a normal person's cells might (thereby producing an indirect decrease in the amounts of phosphate filtered per unit time). In other words, the acute elevations in serum phosphate could conceivably be larger in normal people than in people who have XLHR. But that presupposes that a person has no capacity to tell if some change in his or her phosphate to calcium intake ratio, for example, is producing any benefit. If there's no obvious benefit, presumably there wouldn't be an impetus to continue taking any reasonable amount of phosphate, with the approval of one's doctor. It would also, obviously, be important to spread the dosage out across the day as much as possible and to consider limiting any dosage of supplemental vitamin D to 2000-4000 IU or less, given that hypercalciuria is thought to be a major factor that can increase the risk of nephrocalcinosis. Some authors have suggested splitting the total daily dosage of phosphate, in people who have XLHR, into 8 dosages, spread out across the day, instead of the usual practice of splitting the dosage into 4-5 increments. Another objection I can think of would be that the PHEX protein or the Na(+)/Pi cotransporter might be expressed in myocytes or myogenic satellite cells or some other extrarenal cell type. That could mean that mtDNA replication or some other Pi-sensitive metabolic process would be specifically affected in the muscles and would not be likely to show up in normal people. But I don't see how a pure and severe case of Fanconi's syndrome couldn't produce the same kinds of long-term problems in postmitotic cell types as something like XLHR can. It probably wouldn't take 7 years to treat the problem in a normal person, but I just think that there's a need to think of this type of thing with the long view in mind. It's necessary for someone to do long-term safety research using supplemental phosphate in normal people and to use reasonable amounts of dietary calcium, etc. Or someone could do that type of research in people who have chronic fatigue syndrome. I don't know what the best approach would be. One could argue that reasonable and low dosages of phosphate would improve both purine and pyrimidine salvage and could help limit something like the age-associated reductions in mtDNA copy number in different cell types. These are just my off-the-cuff thoughts, but I think the notion that 7 days of "phosphate loading" is enough to make anyone "A-okay," in view of the mechanisms by which both the purine and pyrimidine ribonucleotide pools could become depleted intracellularly, for example, doesn't make a whole lot of sense to me. If the intracellular phosphate depletion is brief, then it makes sense to me that a brief period of time would be required to correct that depletion. But one is not even going to be able to tell if the intracellular phosphate levels are being maintained in some cases, given the frequently-observed independence of the intracellular and extracellular phosphate concentrations. So someone would have to do muscle biopsies or use 31P-MRS intermittently or measure red blood cell 2,3-diphosphoglycerate levels as a surrogate for the measurement of the intracellular Pi levels in myocytes, etc.
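As a trivial arithmetic footnote to the dosing-schedule point above, splitting a given daily dose into more increments mainly lowers the size of each individual dose; the total daily amount below is purely hypothetical.

```python
# Hypothetical example of splitting a daily phosphate dose into 4, 5, or 8 increments,
# as discussed above. The total daily amount is invented for illustration.

total_daily_mg_elemental_phosphorus = 1000   # hypothetical total daily dose

for n_doses in (4, 5, 8):
    per_dose = total_daily_mg_elemental_phosphorus / n_doses
    print(f"{n_doses} doses/day -> ~{per_dose:.0f} mg elemental phosphorus per dose")
```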
Tuesday, September 8, 2009
Potential for Competition Among Phosphate, Uric Acid (Urate), and Antivirals Used to Treat Influenza for Transport by Organic Anion Transporters
The authors of this article [Yabuuchi et al., 1998: (http://jpet.aspetjournals.org/cgi/reprint/286/3/1391)(http://www.ncbi.nlm.nih.gov/pubmed/9732402?dopt=Abstract)] describe the capacity of the type I Na(+)/Pi cotransporter (NPT1), a sodium and inorganic phosphate (Pi) transporter, to transport either organic anions, including probenecid, or inorganic phosphate (Pi) out of the liver and into the blood. Yabuuchi et al. (1998) noted that probenecid can compete with Pi for transport by NPT1, and this could conceivably mean that a higher intake of Pi might inhibit the efflux of uric acid (urate, UA), an organic anion whose reabsorption by proximal tubule epithelial cells can be inhibited by probenecid (http://scholar.google.com/scholar?hl=en&q=urate+probenecid), from the liver or otherwise influence the efflux or uptake of urate or xanthine by cells in the liver or kidneys, etc. (http://scholar.google.com/scholar?hl=en&q=%22inorganic+phosphate%22+anion+transporter). It's also conceivable that increases in extracellular or, more likely, intracellular Pi could slow the elimination of antiviral drugs used to treat influenza. For example, Oo et al. (2002) [Oo et al., 2002: (http://www.pubmedcentral.nih.gov/picrender.fcgi?artid=127254&blobtype=pdf)(http://www.ncbi.nlm.nih.gov/pubmed/12019123)] noted that the active metabolites of some neuraminidase inhibitors are mostly excreted unchanged, such as through their uptake by the proximal tubule cells, from the peritubular capillaries, and efflux across the luminal (apical) membranes of proximal tubule epithelial cells into the tubular fluid. Karie et al. (2006) [Karie et al., 2006: (http://ndt.oxfordjournals.org/cgi/reprint/21/12/3606.pdf)(http://www.ncbi.nlm.nih.gov/pubmed/16799172)] noted that some neuraminidase inhibitors do not serve as substrates for cytochrome P450 oxidoreductases in the liver and do not inhibit those enzymes either, and that's a major reason that their active metabolites are mostly excreted unchanged by renal tubular excretion. Probenecid competes with some of these drugs for tubular transport, but these interactions would probably not be very likely and would be most likely to occur, if at all, in people whose kidney function has already been diminished, as a result of age or other factors. This is a hastily-chosen article that describes the capacity of probenecid to inhibit the transport and, hence, renal excretion of some neuraminidase inhibitors or their active metabolites [(http://www.cdc.gov/Mmwr/preview/mmwrhtml/rr4814a1.htm); (http://scholar.google.com/scholar?q=probenecid+neuraminidase+inhibitor&hl=en)], and that basically means that oral purines or phosphate supplementation could conceivably slow the elimination of some neuraminidase inhibitors, and that wouldn't necessarily be desirable. It might sound good, and some people have proposed the use of probenecid to allow for the use of neuraminidase inhibitors at lower dosages (thereby allowing more people to be treated with antivirals, in the event of a "1970's-style shortage" of antivirals). But that could be a dangerous approach, given that the constant efflux of some neuraminidase inhibitors is necessary to prevent the potentially problematic effects of their intracellular accumulation in cells in the liver or kidneys.
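To make the competition idea a bit more concrete, here's a minimal sketch of classical competitive inhibition of a shared transporter in Michaelis-Menten form; all of the Vmax, Km, and Ki values are invented for illustration and are not taken from Yabuuchi et al., Oo et al., or Karie et al.

```python
# Minimal competitive-inhibition sketch for a shared transporter: a competing anion
# (Pi, urate, probenecid, etc.) raises the apparent Km for the drug without changing Vmax.
# All parameter values are invented for illustration.

def transport_rate(drug_uM, vmax=100.0, km_uM=50.0, competitor_uM=0.0, ki_uM=200.0):
    """Michaelis-Menten transport rate with a classical competitive inhibitor."""
    apparent_km = km_uM * (1.0 + competitor_uM / ki_uM)
    return vmax * drug_uM / (apparent_km + drug_uM)

print(transport_rate(drug_uM=25))                       # ~33 (arbitrary units), no competitor
print(transport_rate(drug_uM=25, competitor_uM=400))    # ~14, secretion slowed by the competitor
```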
Thus, if one were taking an antiviral to treat an influenza infection and also taking some oral purine compound or source of inorganic phosphate (Pi), one might need to reduce the dosages of those or, as discussed by Karie et al. (2006), reduce the dosages of the antivirals. It would seem that reducing the dosage of the antiviral would not be the better approach, in theory, but one would obviously want to discuss this with one's doctor. Some of the major old M2 protein inhibitors, used as antivirals in the treatment of influenza, are derivatives of 1-aminoadamantane, and some of those are also excreted largely unchanged. Aminoadamantane derivatives are apparently transported by organic cation transporters and would seem not to compete with UA or phosphate, but probenecid, despite being a weak acid and a classic inhibitor of organic anion transport, can sometimes inhibit the transport of substrates of organic cation transporters (http://scholar.google.com/scholar?hl=en&q=probenecid+aminoadamantane). There are strange ways in which substrates of organic cation transporters can influence the transport of other substrates (drugs or physiological compounds) of organic anion transporters [Khamdang et al., 2002: (http://jpet.aspetjournals.org/cgi/content/full/303/2/534)(http://www.ncbi.nlm.nih.gov/pubmed/12388633?dopt=Abstract)], maybe because they, like probenecid, carry ionizable groups that can be either protonated or deprotonated, depending on the pH, or because they contain more than one ionizable group. There can be pH extremes and variations in the tubular fluid, for example, and there could be indirect interactions. An increase in the reabsorption of UA could, for example, be pH dependent; if UA binds to an efflux transporter in proximal tubule cells without serving as a good substrate for that transporter, the increase in reabsorption could thereby produce an indirect, pH-sensitive reduction in the excretion of a drug that competes with UA for transport, etc.
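Since the pH-dependence argument turns on how much of a compound is in its charged versus uncharged form at a given tubular-fluid pH, the Henderson-Hasselbalch relation makes the point quickly; the pKa values below are approximate or outright hypothetical, chosen only to illustrate the range of behaviors.

```python
# Henderson-Hasselbalch: fraction of an ionizable group in its ionized form as a
# function of pH. For an acid, "ionized" means deprotonated (A-); for a base,
# "ionized" means protonated (BH+). pKa values are approximate or hypothetical.

def fraction_ionized(pH, pKa, is_acid=True):
    ratio = 10 ** (pH - pKa)   # [A-]/[HA] for an acid, [B]/[BH+] for a base
    return ratio / (1 + ratio) if is_acid else 1 / (1 + ratio)

# A weak acid with pKa ~3.4 (roughly probenecid-like) stays almost fully ionized
# across the tubular-fluid pH range (~4.5-8), so pH swings matter relatively little:
print(fraction_ionized(5.0, 3.4, is_acid=True))    # ~0.98
# By contrast, a group with a pKa inside the tubular-fluid pH range (hypothetical
# pKa of 6.5 here) changes its ionization a lot along the nephron:
for pH in (5.0, 6.5, 8.0):
    print(pH, round(fraction_ionized(pH, 6.5, is_acid=True), 2))   # 0.03, 0.5, 0.97
```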
Another implication is that increases in the intracellular Pi concentration could reduce the loss of purine nucleotides both by inhibiting adenosine deaminase (and by activating adenosine kinase, arguably) and by reducing the efflux of cAMP or cGMP or other purine substrates of some organic anion transporters or multidrug resistance proteins that transport purines out of cells. This might mean that phosphate could, apart from its role in promoting normal purine salvage, serve as a dose-reducing agent for oral purines, such as ATP disodium, even in the absence of an influenza infection, obviously. But that's more theoretical, and these are just my opinions. Obviously, other medications, including but not limited to some antibiotics, are transported by organic anion transporters, too, and that's another reason one should discuss this type of thing with one's doctor.