One thing I was getting at, in that last posting, was that the intracellular total folate level in hippocampal neurons, say, could be relatively constant and "normal" across two groups of people between ages 60 and 80 (with one group going on to develop Alzheimer's) and still interact with other individual factors (lifestyle, genetic, etc.) in ways that contribute to the development of Alzheimer's. Just because two people have identical, marginally-adequate intracellular total folate concentrations in their hippocampal neurons doesn't mean that one of them won't be affected more by that state of marginal adequacy. And I'm talking more about the need for more robust interventions, such as the use of bioavailable, reduced folates, than about looking for associations (I realize that some of these differences could average out across the large groups of people in a study looking at associations).
I realize that, in a large study, one can only look at a single variable, such as serum folate, and look for an association. But my point is that there are all of these flawed or false assumptions about the ways in which tissue-specific intracellular folate concentrations track serum folate, and those assumptions can lead to erroneous conclusions.
Wednesday, December 31, 2008
Link to a Discussion of Folate and Alzheimer's Disease
Here's a link to a piece on folic acid in relation to Alzheimer's. The article it mentions, raising the possibility that physical exercise could produce higher serum folate levels, sounds interesting, and that's a plausible way of explaining some of the variation in risk. I'll try to post the link to the article showing that iron deficiency in animals lowered serum folate; findings like that imply that the liver's ability to maintain the folate cycle (essentially, to export 5-methyltetrahydrofolate to the blood) by maintaining the cellular redox state may be impaired in a person who is sedentary and has poor insulin sensitivity, etc. The iron effect suggests that an improvement in mitochondrial functioning may be a factor that normalizes the folate cycle (in response to the amelioration of iron deficiency or to exercise, given that both can improve the redox state). Iron deficiency has been shown to reduce mitochondrial complex I and complex IV activity in different tissues. I'll look at the article they refer to. This blog piece has a quote from one of the researchers who was involved in that study:
http://www.tangledneuron.info/the_tangled_neuron/2007/07/folate-folic-ac.html
In a lot of the studies looking at serum folate in relation to disease risk factors, I think there's still a tendency to think of things in terms of deficiency vs. sufficiency of serum folate, etc. I know it's necessary to look at serum folate levels in large trials, but there can be depletion of cerebrospinal fluid MTHF and MTHF-responsive neurological symptoms in the presence of normal serum folate levels [as discussed in this article: (http://www.ncbi.nlm.nih.gov/pubmed/16365882)]. The disturbing thing about that article is that "cerebral folate deficiency" is not a single genetic disorder (though some cases apparently can be caused by mtDNA heteroplasmy) and can apparently be an "acquired" condition (and be due to autoantibodies to folate transporters, etc.). Before I read that article, I'd thought that it was a single genetic disorder or cluster of single-gene disorders that could produce a given phenotypic manifestation (low CSF MTHF and neurological abnormalities that are responsive to exogenous MTHF). This is something similar: (http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pubmed&pubmedid=3783183).
This article discusses the fact that an unknown percentage of serum folate, at levels above 50 nM, is folic acid and not MTHF: (http://www3.interscience.wiley.com/journal/118671262/abstract). That article, in spite of some cautionary statements at the end, does provide support for the use of methylfolate. Apparently the mucosal epithelial cells in the intestinal tract can only reduce and methylate a few hundred micrograms of a given dose of folic acid (converting a single molecule of folic acid into tetrahydrofolate requires either two or four catalytic cycles of dihydrofolate reductase--I forget which, but I think it's four). Those are big pieces of information. But a big thing in my mind is the extreme variability, between studies, in the serum folate responses to different doses of folic acid. My old folic acid paper is not perfect, but I list some of the serum folate responses to given doses of folic acid, and they're very inconsistent.
I doubt that tissue folate levels are kept "constant" to any real extent, in the face of variations in serum folate, as some articles suggest they are. This article (http://www.ncbi.nlm.nih.gov/pubmed/3461471) notes, on the first page, that intracellular folate levels tend to be about three orders of magnitude (~1,000-fold) higher than serum folate levels, and it's obviously true that cells concentrate and accumulate intracellular folates (THF, 5-MTHF, etc.) in the presence of a fixed extracellular concentration of folic acid, MTHF, or folinic acid. But then the authors state that the intracellular concentration remains fixed in response to 20-40-fold increases in serum folate. That statement may have some degree of truth if one uses the 10-20 nM starting point the authors use (the "normal" serum folate range). In that case, an increase from 20 to 400 or 800 nM serum folate may not increase the intracellular total folate concentration much in, say, astrocytes in the hippocampus. But does the absence of a major increase in the intracellular total folate concentration in an extrahepatic cell mean that the initial concentration (the one that failed to increase) was "normal" or "desirable," or that it was the 20 uM figure that is assumed to exist in every cell? It doesn't, and another issue is that 20 times 20 nM is still a fairly low serum folate level, at least from the standpoint of the concentration-dependent effects of folate seen in cell culture studies. In this article (Karen Brown et al.), an extracellular folate level of 20 nM produces intracellular total folate levels that are much, much lower than those produced by an extracellular folate level of 9.3 uM (when one does the calculation from the protein-normalized data, it works out to something like 45 uM intracellularly under the 9.3 uM condition, I think, and much less in the cells in the 20 nM cultures). And cellular differentiation and proliferation were shown to be much more robust at 9.3 uM extracellular folate than at 20 nM:
http://www.ncbi.nlm.nih.gov/pubmed/16469322
The authors also discuss the "large capacity" of the cells to accommodate extra folate coenzymes. One can say that the 9.3 uM concentration is very high and supraphysiological, but that's a separate issue. The point is that cells in the brain probably cannot regulate their intracellular folate levels in a "pristine," rigid, and predictable manner in response to changes in serum folate, and who's to say what the best intracellular total folate concentration is. A computational study showed that the folate cycle starts to fall apart at intracellular folate levels below 5 uM, but there are lots of issues to consider.
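As a rough illustration of the kind of back-calculation I'm describing (converting protein-normalized folate data into an approximate intracellular molarity), here's a minimal Python sketch. It's only a sketch: the input value is hypothetical rather than Brown et al.'s actual data, and the 0.5 g protein/mL intracellular water figure is the packed-fibroblast value (Shang et al.) cited in the conversion-factors posting below.

# Rough sketch: protein-normalized total folate -> approximate intracellular molarity.
# The 0.5 g protein/mL intracellular water figure is an assumption (the packed-fibroblast
# value from Shang et al.), and the input value below is hypothetical.
def nmol_per_mg_protein_to_uM(nmol_per_mg_protein, g_protein_per_mL_water=0.5):
    nmol_per_g_protein = nmol_per_mg_protein * 1000.0               # mg protein -> g protein
    nmol_per_mL_water = nmol_per_g_protein * g_protein_per_mL_water
    return nmol_per_mL_water                                        # nmol/mL water = umol/L = uM

print(nmol_per_mg_protein_to_uM(0.1))   # a hypothetical 0.1 nmol/mg protein works out to 50 uM

The point is just that the molarity one ends up with depends heavily on the protein-per-water assumption one plugs in.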
Correction
I corrected the small error I'd made in my posting on the conversion of nmol/g protein into nM. I mistakenly put ng/g as the starting units, but I meant to put nmol/g. When I'm not doing math regularly or don't check my work adequately, I tend to make small errors like that. That's part of the reason I type out some of the conversions, in a slightly-moronic way. That helps me avoid making silly errors.
Note on Purines in Nonhuman Species; Neuroprotective Effects of Purines
One thing that allows rodents and other species to tolerate larger doses of purines is that humans and, I think, a couple of monkey species (this article complicates that picture: http://www.ncbi.nlm.nih.gov/pubmed/3928241) are the only species with no urate oxidase activity. In humans, urate can be degraded to allantoin through successive, nonenzymatic reactions (nitration or nitrosylation reactions and the like), but effectively the same conversion is carried out enzymatically in most other species. People have suggested that the loss of urate oxidase may have contributed to cognitive development in hominids or something, but that's not my area. I can't find the references quickly.
The thing is, though, purines have really powerful neuroprotective and neurotrophic effects, and they have potential uses in treating brain injuries. I know one pharmaceutical company is testing an intravenous inosine preparation for treating either strokes or some type of brain injury, and that would be the way to do it. But there are many articles showing that oral guanosine or guanosine monophosphate has fairly significant effects on the brain at remarkably low doses. Also, it's not true that oral purines are all degraded into uric acid in the intestinal tract. Researchers would have to consider the pharmacokinetic aspects, though, and also use forms of purines that are actually soluble (the disodium salts of the monophosphates are soluble; the other forms have much more limited solubility). Here's one reference showing that oral inosine is absorbed intact and elevates plasma hypoxanthine and, nonsignificantly, inosine (and also xanthine): (http://www.ncbi.nlm.nih.gov/pubmed/11912550). There's a lot of animal research showing neuroprotective effects of inosine, but I forget what the status of the human research is. Here are some examples of the neuroprotective effects of inosine (this will help me collect some of these):
http://atvb.ahajournals.org/cgi/content/full/25/9/1998
(pubmed: http://www.ncbi.nlm.nih.gov/pubmed/15976325?dopt=Abstract)
http://www.inotekcorp.com/publications/pdf/ipcpub306.pdf
(pubmed: http://www.ncbi.nlm.nih.gov/pubmed/15019271)
http://www.iovs.org/cgi/content/full/45/2/662
(pubmed: http://www.ncbi.nlm.nih.gov/pubmed/14744912?dopt=Abstract)
http://stroke.ahajournals.org/cgi/content/full/36/3/654
(pubmed: http://www.ncbi.nlm.nih.gov/pubmed/15692110?dopt=Abstract)
http://www.ncbi.nlm.nih.gov/pubmed/15146976 (tests guanosine and adenosine also)
There are tons of them, but here's a long and frequently-cited review on the neurotrophic effects of purines:
http://www.ncbi.nlm.nih.gov/pubmed/10845757
There is the concentration issue (some of the in vitro studies use very high concentrations), but the effects in vivo, in animals, are really significant and suggest that it would be possible to use purines over the longer term, at lower dosages, to treat traumatic brain injuries, to enhance recovery after strokes, etc. The interactions of purines with other aspects of metabolism, such as mitochondrial DNA turnover and so on, are really interesting to me.
Sample Scaling Calculation for Article on Nucleotides
This is an article (Tzu-Hsiu Chen et al.) showing memory-enhancing effects of dietary nucleotides. The study is in mice and uses a mixture of nucleotides (NT's) at 0.5% of the diet, meaning 5 g NT's/kg diet:
http://www.ncbi.nlm.nih.gov/pubmed/8937510
Here are the converted dosages (using my conversion factors: http://hardcorephysiologyfun.blogspot.com/2008/12/equations-for-animal-food-intake-and.html):
(23 g inosine/100 g NT's) x (5 g NT's mixture/1 kg diet) x (1,000 mg inosine/1 g inosine) x (0.150 kg diet consumed/kg bw mouse) = 172.5 mg inosine/kg bw/d
(35 g guanosine 5'-monophosphate disodium/100 g NT's) x (50) x (0.150) = 262.5 mg GMP Na2/kg bw/d
(21 g cytidine/100 g NT's) x (50) x (0.150) = 157.5 mg cytidine/kg bw/d
(16 g uridine/100 g NT's) x (50) x (0.150) = 120 mg uridine/kg bw/d
(5 g thymidine/100 g NT's) x (50) x (0.150) = 37.5 mg thymidine/kg bw/d
If I use the first scaling factor (5.79), to scale it crudely to humans (I'm using the 70 kg figure, because I already used it):
29.8 mg/kg inosine (2,086 mg/d)
45.3 mg/kg GMP Na2 (3,171 mg/d)
27.2 mg/kg cytidine (1,904 mg/d)
20.7 mg/kg uridine (1,451 mg/d)
6.5 mg/kg thymidine (455 mg/d)
I need a new scaling factor for mice, I think, because the purine dosages are really too high. That would probably cause hyperuricemia in humans. I think the limit of total purines (from IMP, AMP or ATP, and GMP combined) would be like 3 grams or 4 maybe, because guanosine and inosine have been shown to elevate uric acid substantially in humans. The pyrimidine dosages are not unreasonable at all. Anyway, I'm not suggesting anyone apply this, but I'm trying to get a sense of what types of dosages produced effects in animals.
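As a rough cross-check on the conversions above, here's a minimal Python sketch (the mixture percentages, including the uridine and thymidine percentages back-calculated from the 120 and 37.5 mg/kg/d figures, the 0.150 kg diet/kg bw/day food intake, and the 5.79 mouse-to-human divisor are the same assumptions used above):

# Cross-check of the nucleotide dose conversions above; all inputs are the same
# assumptions used in the hand calculations (0.5% of the diet, 0.150 kg diet/kg bw/day
# for a mouse, a crude mouse-to-human divisor of 5.79, and a 70-kg reference person).
fraction_of_diet = 0.005     # 0.5% of the diet = 5 g nucleotides/kg diet
food_intake = 0.150          # kg diet consumed per kg body weight per day (mouse)
scaling_divisor = 5.79       # crude mouse-to-human scaling factor
human_bw = 70.0              # kg

mixture_percent = {"inosine": 23, "GMP Na2": 35, "cytidine": 21, "uridine": 16, "thymidine": 5}

for name, percent in mixture_percent.items():
    mouse_dose = (percent / 100.0) * fraction_of_diet * 1e6 * food_intake   # mg/kg bw/d, mouse
    human_dose = mouse_dose / scaling_divisor                               # mg/kg bw/d, scaled
    print(f"{name}: {mouse_dose:.1f} mg/kg/d (mouse) -> {human_dose:.1f} mg/kg/d, "
          f"~{human_dose * human_bw:.0f} mg/d for a 70-kg person")

The printed values agree with the list above to within rounding.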
Tuesday, December 30, 2008
Cell Biology Conversion Factors for nmol/g Wet Weight to Intracellular Molarity Conversions
Even though one might argue that this lends a false sense of quantitative objectivity to an evaluation of an article or series of articles, these conversions are fairly basic. Researchers sometimes do these conversions to get a crude sense of the intracellular concentrations, based on data expressed in nmol of substance per g tissue or per 10^6 cells, etc.
Intracellular Water Per Gram of Wet Weight of Tissue:
Yamada et al. (2000) [Kazuhiro Yamada et al., 2000: (http://jn.nutrition.org/cgi/content/full/130/8/1894) (http://www.ncbi.nlm.nih.gov/pubmed/10917899?dopt=Abstract)] includes a cited value for the cytosolic volume (~intracellular water) as 0.4 mL/g fresh weight (wet weight) of the liver.
Kimoto et al. (2001) [Tetsuya Kimoto et al., 2001: (http://endo.endojournals.org/cgi/content/full/142/8/3578) (http://www.ncbi.nlm.nih.gov/pubmed/11459806?dopt=Abstract)] estimated that hippocampal tissue contained 0.7-0.8 mL intracellular water/g ww.
Fatouros and Marmarou (1999) [Fatouros and Marmarou, 1999: (http://www.jnsonline.org/jns/issues/v90n1/pdf/n0900109.pdf) (http://www.ncbi.nlm.nih.gov/pubmed/10413163)] found that the average water contents of white matter and gray matter were 0.68 mL/g ww and 0.80 mL/g ww, respectively. These measurements encompass both the intracellular and extracellular water contents, however. Fatouros and Marmarou (1999) excluded the volume of water in the cisterns, etc., and argued that "most" of the water in the gray matter is intracellular but that about 15 percent of the water in the white matter is extracellular. Some of the values for the gray matter, listed in Table 3, are, however, between 0.75 and 0.8 mL/g ww.
Aliev et al. (2002) [Aliev et al., 2002: (http://www.ncbi.nlm.nih.gov/pubmed/11744012)] estimated the intracellular water content of the rat heart to be 0.615 mL/g wet mass. They also estimated the extracellular water content to be about 0.174 mL/g wet mass (partly interstitial fluid and partly the water in the blood within blood vessels).
Cellular Protein Per Gram of Wet Weight of Tissue:
Bissell et al. (1973) [D. Montgomery Bissell et al., 1973: (http://jcb.rupress.org/cgi/content/abstract/59/3/722) (http://www.ncbi.nlm.nih.gov/pubmed/4357460?dopt=Abstract)] cites a value of 22 percent cellular protein for whole liver (0.22 g protein/g ww liver).
Maia et al. (2005) [Ana Luiza Maia et al., 2005: (http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1190373)] give values of 150 g cellular protein/1500 g wet weight (0.1 g protein/g ww) of liver and 2,240 g cellular protein/28,000 g wet weight of muscle (0.08 g protein/g ww).
Kimoto et al. (2001) [Tetsuya Kimoto et al., 2001: (http://endo.endojournals.org/cgi/content/full/142/8/3578) (http://www.ncbi.nlm.nih.gov/pubmed/11459806?dopt=Abstract)] found that hippocampal tissue contained 0.96 (+/- 0.02) mg protein/10 mg wet weight (0.094-0.098 mg protein/mg ww).
Ratio of Wet Weight to Dry Weight of Tissue:
Wimmer et al. (1985) [Wimmer et al., 1985: (http://www.ncbi.nlm.nih.gov/pubmed/4086343)] found that the livers of male rats contained 3.33 (+/- 0.3) g wet weight/g dry weight of liver and that the livers of female rats contained 3.28 (+/- 0.24) g wet weight/g dry weight. The authors recommended that the conversion factor of 3.3 g ww/g dw be used in general.
This article (Ronglih Liao et al.) has some good references on these conversions:
http://circres.ahajournals.org/cgi/content/full/78/5/893#R34
(pubmed: http://www.ncbi.nlm.nih.gov/pubmed/8620610?dopt=Abstract)
0.125 (+/- 0.004) mg Lowry protein/mg wet weight heart for control white turkeys (turkey "poults")
0.087 (+/- 0.003) mg Lowry protein/mg ww for hearts of turkeys with cardiomyopathy
Cited assumption for intracellular (cytosolic) water volume in myocytes of control animals and animals with dilated cardiomyopathy: 0.5 mL intracellular water/g wet weight of heart tissue.
In the turkey heart: 0.218 = (g dry weight/g wet weight), for both failing and healthy hearts, or 4.59 g ww/g dry weight.
[Note: the Lowry protein content is an estimate of the intracellular protein content (the intracellular proteins have a higher content of aromatic amino acid residues) and excludes the influence of extracellular proteins on the measurement]
Here's another one (Jie Shang et al.) that uses a 0.5 g protein/mL intracellular water for "packed fibroblasts" in culture:
http://www.ncbi.nlm.nih.gov/pubmed/14729664?dopt=Abstract
This article (David Cichowicz et al.) used a conversion of 18.1 nmol/g ww for the liver of intracellular total folates and calculated an intracellular total folates concentration of 25 uM. I won't extrapolate their conversion factor now, but there it is:
http://www.ncbi.nlm.nih.gov/pubmed/3828321
This article (William Strong and coauthor) uses the assumption that there's 0.7 mL intracellular water/g wet weight of rabbit liver:
http://www.ncbi.nlm.nih.gov/pubmed/2514800
Lund and Wiggins (1987) [Lund and Wiggins, 1987: (http://www.ncbi.nlm.nih.gov/pubmed/3620602)] cite and calculate multiple values for the cytosolic water volume per g ww of liver, including 0.489 mL/g ww, 0.526 mL/g ww, and 0.55 mL/g ww. The 0.4 mL/g ww figure is looking, more and more, like an unusually low value. I'm going to increase the standard conversion factor I use to reflect the use of ~0.7 mL/g ww in different cell types. The Cichowicz conversion used 0.72 mL/g ww, Strong et al. used 0.7, Liao cited a value of 0.5, Kimoto used 0.7-0.8 (call it 0.75), and then there are the 0.489 from Table 1 in Lund and Wiggins (1.86/3.8), the 0.55 cited on p. 63 of Lund and Wiggins, and the 0.4. The average is 0.514, but I'm going to use 0.615, from Aliev et al. (2002), as the value (see "Summary" below).
That article (Lund and Wiggins, 1987) also includes these cell-number values for liver parenchymal cells, which convert into cells per mL of intracellular water as follows:
(4.15 x 10^8 cells/g dw) x (1 g dw/3.58 g ww) x (1 g ww/0.6 mL intracellular water) = 1.932 x 10^8 cells/mL intracellular water = 1.159 x 10^8 cells/g ww
They also cite a value of 4.34 x 10^8 cells/g dw, which would convert to:
(4.34 x 10^8) x (1/2.148) = 2.02 x 10^8 cells/mL intracellular water = 1.212 x 10^8 cells/g ww (where 2.148 mL intracellular water/g dw = 3.58 g ww/g dw x 0.6 mL intracellular water/g ww)
This article uses a value of 1.9 x 10^8 human fibroblasts/mL intracellular water, which converts to 1.14 x 10^8 cells/g ww (using the 0.6 mL intracellular water/g ww value) [Foo et al., 1982: (http://jn.nutrition.org/cgi/content/abstract/112/8/1600) (http://www.ncbi.nlm.nih.gov/pubmed/7047695?dopt=Abstract)].
McDevitt et al. (2005) [Theresa McDevitt et al., 2005: (http://www.ncbi.nlm.nih.gov/pubmed/15671207)] used values of 0.909-1.25 mg cellular protein/10^6 cells (monocyte-macrophage-lineage, U937 cells):
(0.909 mg protein/10^6 cells) x (1 g ww/100 mg protein) x (0.6 mL intracellular water/1 g ww) = 0.00545 mL/10^6 cells = 1.83 x 10^8 cells/mL intracellular water
or, for the 1.25 value, the conversion gives 1.33 x 10^8 cells/mL intracellular water
I'll use 1.9 x 10^8 cells/mL intracellular water as a conversion factor (looking at the different results)
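Here's the same type of conversion written as a minimal Python sketch (assuming, as in the McDevitt conversion above, 100 mg cellular protein/g ww and 0.6 mL intracellular water/g ww):

# Sketch of the cells-per-mL-of-intracellular-water conversion, using the same
# assumptions as above (100 mg protein/g ww and 0.6 mL intracellular water/g ww).
def cells_per_mL_water(mg_protein_per_million_cells, mg_protein_per_g_ww=100.0,
                       ml_water_per_g_ww=0.6):
    ml_per_million_cells = (mg_protein_per_million_cells / mg_protein_per_g_ww) * ml_water_per_g_ww
    return 1.0e6 / ml_per_million_cells

print(cells_per_mL_water(0.909))   # ~1.83 x 10^8 cells/mL (McDevitt et al., low end)
print(cells_per_mL_water(1.25))    # ~1.33 x 10^8 cells/mL (high end)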
Culic et al. (1999) [Ognjen Culic et al., 1999: (http://ajpcell.physiology.org/cgi/content/full/276/5/C1061) (http://www.ncbi.nlm.nih.gov/pubmed/10329953)] used these conversion factors for porcine aortic endothelial cells and for the heart tissue overall:
1 mg cellular protein/5.4 x 10^6 cells (in porcine aortic endothelial cells) (gives 0.185 mg protein/10^6 cells)
10 ug triglycerides/1.1 x 10^(-8) mol triglycerides (average molecular weight of triglycerides assumed to be 900 g/mol)
17 ug triglycerides/mg protein in endothelial cells
140 mg protein/1 g of myocardial tissue
Summary:
To convert nmol/g wet weight to intracellular concentration in nM:
(Y nmol/g ww) x (1 g ww tissue/0.615 mL intracellular water) x (1000 mL intracellular water/1 L intracellular water) = (Y) x (1626) = Z nM (averaged across the whole tissue).
To convert nmol substance Y/g cellular protein into nM, use the assumption that 10 percent of the wet weight of the tissue is protein (100 mg protein/g ww):
(Y nmol/g protein) x (0.10 g protein/g ww) x (1 g ww/0.615 mL intracellular water) x (1000 mL intracellular water/1 L intracellular water) = (Y) x (163) = Z nM (intracellular).
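Here are those two conversions written out as a small Python sketch (using the same 0.615 mL intracellular water/g ww and 0.1 g protein/g ww assumptions):

# The two summary conversions above, with the same assumed factors.
def nmol_per_g_ww_to_nM(y, ml_water_per_g_ww=0.615):
    # nmol per g wet weight -> nM, averaged across the whole tissue (factor of ~1626)
    return y / ml_water_per_g_ww * 1000.0

def nmol_per_g_protein_to_nM(y, g_protein_per_g_ww=0.10, ml_water_per_g_ww=0.615):
    # nmol per g cellular protein -> nM (factor of ~163)
    return y * g_protein_per_g_ww / ml_water_per_g_ww * 1000.0

# Example: the 18.1 nmol total folates/g ww liver value from the Cichowicz et al. article
print(nmol_per_g_ww_to_nM(18.1) / 1000.0)   # ~29 uM here; they got ~25 uM using 0.72 mL/g ww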
Basic conversion factors for folic acid and reduced folates:
1 ng/mL serum folate = 2.265 nM (equivalently, 1 nM = 0.4414 ng/mL), based on the molar mass of folic acid
Molar mass of folic acid: 441.4 g/mol
Molar mass of 5-formyltetrahydrofolate (5-CHO-THF): 473.44 g/mol
Molar mass of 10-formyltetrahydrofolate (10-CHO-THF): 473.44 g/mol
Molar mass of 5-methyltetrahydrofolate (MTHF): 459.46 g/mol
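And here's a one-line sketch of the serum folate unit conversion, using the molar masses listed above (the 2.265 factor corresponds to folic acid at 441.4 g/mol; reading the same number as 5-MTHF shifts it slightly):

# ng/mL is the same as ug/L, so dividing by the molar mass (g/mol) and multiplying
# by 1000 gives nmol/L; the default molar mass is that of folic acid.
def serum_folate_ng_per_mL_to_nM(ng_per_mL, molar_mass=441.4):
    return ng_per_mL / molar_mass * 1000.0

print(serum_folate_ng_per_mL_to_nM(10.0))           # 10 ng/mL read as folic acid: ~22.7 nM
print(serum_folate_ng_per_mL_to_nM(10.0, 459.46))   # the same 10 ng/mL read as 5-MTHF: ~21.8 nM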
Multidrug-Resistance Proteins: Overlap of Transport of Folates and Purines; Relevance to Glutathione
This is an interesting article (by Hao Zeng et al.) showing that methotrexate and leucovorin/reduced folates can be transported by multidrug-resistance proteins (ATP-dependent, cellular efflux transporters). This could be relevant to an understanding of folate/purine interactions. It's possible that higher intracellular folate levels could limit purine efflux (purines can sometimes be substrates for MDR/MRP-mediated efflux).
http://cancerres.aacrjournals.org/cgi/content/full/61/19/7225
(pubmed: http://www.ncbi.nlm.nih.gov/pubmed/11585759?dopt=Abstract)
Here's an article (by Jan Wijnholds et al.) showing that MRP5/mdr5 can transport purines out of cells and may cause glutathione (GSH) efflux (such as in response to an excess of intracellular purines?), given the potential for GSH cotransport with purines:
http://www.pnas.org/content/97/13/7476.full
(pubmed: http://www.ncbi.nlm.nih.gov/pubmed/10840050?dopt=Abstract)
That could be a concern with the use of higher doses of purines/dietary nucleotides in the treatment of liver diseases or traumatic brain injury. They also mentioned the similarity with the organic anion transporters, and urate (uric acid) is transported by organic anion transporters (and some urate is excreted into the bile). Purines could compete with bile acids for efflux into the bile and thereby, conceivably, cause liver issues at high doses.
A lot of drugs and physiological mediators could conceivably influence folate metabolism by influencing MDR/MRP expression, etc.
Depletion of Reduced Folates From the Brain, Mimicking Wernicke's Encephalopathy (Thiamine Depletion)
This article, by Eg Lever and coauthors, is really interesting and shows another example of neurological symptoms (in this case, symptoms that are consistent with both Wernicke's encephalopathy and subacute combined degeneration) due to the depletion of reduced folates (methylfolate in particular) from the CSF and brain:
http://jnnp.bmj.com/cgi/content/abstract/49/10/1203
(pubmed: http://www.ncbi.nlm.nih.gov/pubmed/3783183?dopt=Abstract)
(full text: http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pubmed&pubmedid=3783183)
Given that there are all those articles on "cerebral folate deficiency," a disorder that can look like a mitochondrial encephalopathy but that can result from many causes and be an acquired "disorder" (http://www.ncbi.nlm.nih.gov/pubmed/16365882), I'll bet that neurological symptoms due to methylfolate depletion from the cerebrospinal fluid are more common than they're usually assumed to be.
That article by Lever et al. (1986) basically shows, in part, that cerebral folate depletion can mimic Wernicke's encephalopathy, the depletion of thiamine (vitamin B1) from the brain that usually occurs in heavy drinkers or people with a history of alcoholism. The authors discuss that and show that the patient actually had a short-lived reticulocyte response to thiamine and vitamin B12. They cite some articles showing that folate repletion can actually increase thiamine transport into cells, which is the opposite of the effect I would have predicted.
There's actually been a lot of research, since that 1986 article came out, on overlap between thiamine and reduced folate transport. This article, by Rongbao Zhao et al., shows that RFC1, one of the reduced folate carriers, can transport thiamine into cells:
http://ajpcell.physiology.org/cgi/content/full/282/6/C1512
(pubmed: http://www.ncbi.nlm.nih.gov/pubmed/11997266?dopt=Abstract)
There are lots of articles on this SLC19 family of transporters, and some of the thiamine transporters also transport biotin. In people with some genetic disorders, derangements of biotin transport can cause this devastating neurological disease that affects the basal ganglia (biotin-responsive basal ganglia disease) (http://ajpcell.physiology.org/cgi/content/full/291/5/C851 and pubmed: http://www.ncbi.nlm.nih.gov/pubmed/16790503?dopt=Abstract). Those authors (Veedamali Subramanian and coauthors) found evidence that the biotin wasn't exerting its effects by bypassing a deficient activity of one of the thiamine transporters, I think. The article is complicated, and I'm not so much up for re-reading parts of it now.
These articles on the overlap between thiamine and folate transport do tend to be complicated, and I used to think that an excess of reduced folates would impair thiamine transport. But that article by Lever et al. (1986) suggests, and cites articles supporting, the idea that the opposite is true: that depletion of reduced folates can reduce thiamine transport. This article, by Tatyana Vlasova and coauthors (http://www.ncbi.nlm.nih.gov/pubmed/15623830?dopt=Abstract), shows that biotin depletion reduces the expression (the mRNA transcript levels) of a biotin transporter, SLC19A3, in human lymphocytes.
But another implication of the article is that some patients diagnosed with Wernicke's encephalopathy might actually have encephalopathy due to the depletion of reduced folates from the brain. This relationship between thiamine transport and intracellular reduced folate levels (as opposed to just reduced folate transport) also might help explain the effects of folate on glycolysis (http://hardcorephysiologyfun.blogspot.com/2008/12/folic-acid-ribose-megaloblastic-anemia.html and http://hardcorephysiologyfun.blogspot.com/2008/12/first-posting-folate-and-glycolysis-in.html) or on PRPP levels (http://hardcorephysiologyfun.blogspot.com/2008/12/nonoxidative-pentose-cycle-prpp-and.html).
There's probably a lot more research on the reduced folate/thiamine overlap, but I'm not so much up for searching on it now. It seems really mind-bending.
Great Article on the Pharmacological Aspects of Different Folates
This is a really great article on the pharmacokinetics of folic acid in relation to methylfolate, and the things the authors discuss are really good for me to learn and know:
http://www3.interscience.wiley.com/journal/118671262/abstract
It's one of the most insightful articles I've ever seen on folic acid, in some ways. One of the important things the authors mention, aside from their finding that the cells in the intestinal tract have a limited capacity to reduce and methylate folic acid [meaning that much of a dose of folic acid will enter the portal vein, liver, and circulation as folic acid and not 5-methyltetrahydrofolate (5-MTHF)], is that, especially for serum folate levels of 50 nM or higher (in a person who has taken only folic acid and not methylfolate), a significant percentage of that serum folate will be folic acid and not 5-MTHF. The commonly-used assay doesn't discriminate between 5-MTHF and folic acid, and the assumption in most of the articles is that serum folate is predominantly 5-MTHF. This apparently isn't the case. I'd been wondering about that, before I read it. I'd seen some articles talking about small increments in serum "unmetabolized" folic acid being of potential concern, in relation to T-cell proliferation, I think. I don't think it would be a concern at reasonable intakes of folic acid, but it could be at higher ones (the authors of that article I link to note that folic acid could accumulate intracellularly in the cells of extrahepatic tissues and disrupt the folate cycle).
The more important implication is that, at any given level of serum folate, one cannot tell what the impact, the benefit, or even the harm of that serum folate value would be for extrahepatic cells (or for cells in the liver, for that matter). The liver would probably be the organ in which the accumulation of unmetabolized folic acid would become a dose-limiting concern for a person taking folic acid (but not as much methylfolate). The lack of discrimination by the serum folate assay is a major issue for interpreting articles about correlations of serum folate with diseases, etc. That's mainly, I think, because the folate receptor, the transporter that preferentially carries folic acid into cells, has a much lower transport capacity than the reduced folate carrier. I forget the details, but I'll have to read up on that.
Sample Calculation That Scales a Rat Dosage to a Human Dosage
I've done more of these crude scalings for articles on topics other than folic acid, and I'll try to put those up soon. But I wanted to apply this to a great article showing some dose-response data for folic acid in an animal model of breast cancer (http://carcin.oxfordjournals.org/cgi/content/full/26/9/1603). I'll also try to get some equations up here for converting ng/g tissue into intracellular molarity, because that's a useful thing to be able to do; I keep having difficulty finding the equations I've used in the past.
The researchers, the authors of that article, used 0, 2, or 8 mg of folic acid/kg diet in rats. If one converts the 2 mg/kg diet (2,000 ug/kg diet) dose, one gets 100 ug/kg bw for a rat. When I apply the 4.71 scaling factor for converting a rat dosage into a dosage for a 70-kg human (I'll try to put up a generic scaling equation, because I'm wondering how much the magnitude of the scaling factor would differ for someone who weighed 55 or 60 kg, for example), 100 ug/kg bw in a rat scales to 21.3 ug/kg/day for a human, which translates into a dose of about 1.49 mg of folic acid/day. The 8 mg/kg diet dose in rats would be about 5.96 mg/day of folic acid in a human. Even that higher dose did not elevate the intracellular folate levels in the cells of the mammary tissue of the rats; I did the calculation a while back, and it came out to something below 1 uM. The intracellular total folate level is "assumed" to be 20 uM in human cells, but, as I've said before, that's unlikely to be the case. Their DNA methylation data also show that the elevation in intracellular total folate was not large enough to have much of an effect on the epithelial or other cells in the mammary tissue. That's a really important type of result that's relevant to humans, and the only way to see the full message of the article is to do these crude scaling calculations, as in the sketch below.
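Here's that calculation written out as a small Python sketch (the 0.05 kg diet/kg bw/day rat food intake is the figure implied by the 2 mg/kg diet-to-100 ug/kg bw step above, and the 4.71 divisor and 70-kg reference weight are the same ones I used):

# Sketch of the diet-concentration-to-human-dose scaling above; the rat food intake,
# the 4.71 rat-to-human divisor, and the 70-kg reference weight are the assumptions
# used in the hand calculation.
def diet_dose_to_human_mg_per_day(mg_per_kg_diet, food_intake_kg_per_kg_bw=0.05,
                                  scaling_divisor=4.71, human_bw_kg=70.0):
    rat_dose = mg_per_kg_diet * food_intake_kg_per_kg_bw   # mg/kg bw/day in the rat
    human_dose_per_kg = rat_dose / scaling_divisor         # crude scaled dose, mg/kg bw/day
    return human_dose_per_kg * human_bw_kg                 # mg/day for a 70-kg person

print(diet_dose_to_human_mg_per_day(2.0))   # 2 mg folic acid/kg diet -> ~1.49 mg/day
print(diet_dose_to_human_mg_per_day(8.0))   # 8 mg/kg diet -> ~5.95 mg/day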
I'll try to read more about the bioavailability differences between folic acid and L-methylfolate, but the AUC and Cmax values in response to a dose of L-methylfolate, past a certain dose level, are several times those for an equimolar dose of folic acid. So one issue with folic acid is probably bioavailability and tissue distribution.
I know one could use a different number to scale the dosages, but even the traditional, standardized, non-species-specific factor of ten allows one to interpret the article in a meaningful way.
Hmmm....Magnesium (Acting Extracellularly or Intracellularly) May Be Able to Block nAChR Channels Directly
Before I did a couple of searches on this today, I'd never seen anything really addressing the mechanism by which Mg2+ inhibits neuromuscular transmission. But this shows a direct blocking effect on nicotinic ACh receptor (nAChR) channels (a direct, anticholinergic effect that would be a postsynaptic action, as opposed to being a presynaptic inhibition of ACh release):
http://jp.physoc.org/cgi/content/abstract/443/1/683?ijkey=df962793a2b4db5f6cce5c2df3863e9ad6e76405&keytype2=tf_ipsecsha
(pubmed: http://www.ncbi.nlm.nih.gov/pubmed/1726594?dopt=Abstract)
It's possible that those concentrations are somewhat close to intracellular concentrations of magnesium that would be reached after an intravenous magnesium treatment. Here's another one showing basically the same thing:
http://www.ncbi.nlm.nih.gov/pubmed/1978344?dopt=Abstract
The authors of both articles found that intracellular Mg2+ was more potent in blocking the nAChR channels than extracellular Mg2+. Wait...the authors say that the reverse is true at the neuromuscular junction, that Mg2+ would act mainly extracellularly in that case. That's interesting, especially because the usual explanation, in all the articles, is that Mg2+ acts presynaptically to inhibit ACh release and doesn't bind to postsynaptic ACh receptors. There could still be an effect of Mg2+ on calcium channels, etc., though, even if this channel-blocking effect at nAChRs is meaningful in vivo.
Equations for Animal Food Intake and Dosage Conversion Factors Based on Allometric Scaling Data
I'm going to post the conversion factors for converting animal dosages, such as are described in "nutrition" articles in terms of mg of substance per kg of diet, into mg/kg body weight. This will help me avoid having to look up the equations in the future. I have to do these types of calculations, in some cases, along with some sort of between-species dosage scaling calculation, in order to be able to evaluate these articles in a meaningful way. This is actually kind of enjoyable to do, but it's disturbing to see the way small changes in the scaling "factors" can drastically alter the way one interprets the results of a given animal study.
I don't think that many doctors or scientists would be able to do these types of calculations in their heads or remember the conversion or scaling factors off the tops of their heads, and that limits the cross-disciplinary appeal of research in these areas. Researchers, particularly those who aren't working in nutrition research, wouldn't be able to appreciate the significance of the results. In some cases, it's literally not possible to tell what the concentrations or dosages are in an article, given the way the information is presented. An article might have important results, but, without a sense of the physiological contexts of the dosages that are producing the different responses, it's not possible to get much out of the article.
Here's one of a series of terrific articles on the anticonvulsant effects of oral guanosine or guanosine 5'-monophosphate: (http://www.ncbi.nlm.nih.gov/pubmed/17682941). The authors give the dosage in mg/kg bw and give the weights of the rats.
Many articles still express dosages in terms of mg/kg diet or percent of diet in (w/w). An expression of a dosage as, say, "guanosine at 0.0425 percent of diet (w/w)" (I'm writing this to help myself remember and allow me to not have to think about it in the future) is the same as an expression of the dosage as "0.0425 g guanosine/100 g diet." To convert "% of substance in diet (w/w)" to "mg substance/kg diet," multiply by 10^4 (10,000).
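Just to make that concrete, here's the same conversion as a tiny sketch (the 0.0425 percent guanosine figure is the example from above):

percent_w_w = 0.0425                    # guanosine at 0.0425 % of diet (w/w)
mg_per_kg_diet = percent_w_w * 10_000   # 0.0425 g per 100 g diet = 425 mg/kg diet
print(mg_per_kg_diet)                   # 425.0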
The conversion factors for converting mg/kg diet to mg/kg bw (along with the masses of typical animals) are from the WHO document I linked to in a past posting on interspecific scaling (http://www.who.int/entity/ipcs/food/jecfa/en/tox_guidelines.pdf). The authors of the WHO document evidently compiled those from multiple sources, and the numbers look similar to ones I've seen before. The mass of an adult rat looks high to me (400 g). I'd switch their number to 300 g (.3 kg), but I'm worried that that would require me to change their food intake values, etc.
The other equation is a dosage scaling equation, derived from allometric data across multiple species, from this article: (http://www.ncbi.nlm.nih.gov/pubmed/17612951), and I think the equation is basically the ratio of two separate solutions of the equations from this paper (one for the animal and one for the human): (http://www.springerlink.com/content/kn360g725382p2m6/). But here are all the relevant conversion factors and the "equation":
mass of adult rat: 0.4 kg (use 0.3 kg, or more typical range of .25 to .35 kg, if only doing scaling and no food conversions)
mass of post-weanling, young rat: 0.1 kg
mass of adult mouse: 0.02 kg (range: 0.01 to 0.03, if only doing scaling and no food conversions)
mass of chick: 0.4 kg
mass of guinea pig: 0.75 kg
mass of rabbit: 2 kg
To convert Y mg/kg diet into mg/kg bw for adult rats, multiply Y by 0.05
To convert Y mg/kg diet into mg/kg bw for young rats, multiply Y by 0.10
To convert Y mg/kg diet into mg/kg bw for adult mice, multiply Y by 0.15
For chicks, multiply Y mg/kg diet by 0.125
For guinea pigs, multiply Y mg/kg diet by .04
For rabbits, multiply Y mg/kg diet by .03
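Here are those diet-to-body-weight factors gathered into one small sketch, so a mg/kg-diet dose can be turned into mg/kg body weight in one step:

DIET_TO_BW = {
    "adult_rat": 0.05,
    "young_rat": 0.10,
    "adult_mouse": 0.15,
    "chick": 0.125,
    "guinea_pig": 0.04,
    "rabbit": 0.03,
}

def diet_to_bw_dose(mg_per_kg_diet, species):
    # Convert a dietary dose (mg per kg of diet) into mg per kg of body weight.
    return mg_per_kg_diet * DIET_TO_BW[species]

print(diet_to_bw_dose(2.0, "adult_rat"))   # 0.1 mg/kg bw (i.e., 100 ug/kg bw)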
I'm sorry if this looks like a third-grade math assignment, but I always have to look these things up and try to remember where to look. Here's the "equation" for the scaling factor, using a p value of 0.7 (the mass units cancel out, obviously, but this will help to obviate the need for me to think about it in the future):
dosage-scaling factor = [(mass of human) / (mass of animal)]^(0.3), with both masses in the same units (e.g., kg)
The exponent of (0.3) is (1-p), and the scaling factors, the p values, range from 0.6 to 0.8 (the "p," here, is referred to as the scaling factor in the literature, but both I and others are applying the terminology loosely), depending on the type of variable one is considering. In the above equation, different p values basically take into account different allometric variables, as far as I understand it. Some of them are the specific metabolic rate, expressed in terms of calories per gram body mass, and the surface area of the body in relation to the mass (which also relates to mass-specific metabolic rate).
These are the "dosage-scaling factors," (I'm calling them that for ease of reference) based on that value of p = 0.7 (I'm using a body mass of 70 kg for a human, just because everyone uses that number):
Adult Rat: 4.71
Young Rat: 7.14
Adult Mouse: 11.57 (using the 0.02-kg mass above; a 0.2-kg mass would give 5.79)
Chick: 4.71
Guinea Pig: 3.90
Rabbit: 2.91
If the rat dose for oral guanosine monophosphate used in those studies is 7.5 mg/kg bw (their rats were actually .25-.35 kg, making the scaling factor somewhat different, but let's say it's 4.71), the human dose would scale to about 1.6 mg/kg (112 mg/day for a 70-kg human?). That sounds low to me, but it's clear that the effects of guanosine are stronger than those, for example, of inosine, which generally produces therapeutic effects at 100-200 mg/kg in all of the animal models.
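Here's a minimal sketch of the scaling calculation itself, using the species masses listed above and applying the result to the 7.5 mg/kg-bw guanosine 5'-monophosphate dose (note that the 0.02-kg mouse mass gives a factor of about 11.6):

def scaling_factor(animal_kg, human_kg=70.0, p=0.7):
    # factor = (human mass / animal mass)^(1 - p), with both masses in the same units
    return (human_kg / animal_kg) ** (1.0 - p)

for name, kg in [("adult rat", 0.4), ("young rat", 0.1), ("adult mouse", 0.02),
                 ("chick", 0.4), ("guinea pig", 0.75), ("rabbit", 2.0)]:
    print(f"{name}: {scaling_factor(kg):.2f}")
# adult rat ~4.71, young rat ~7.14, adult mouse ~11.57, chick ~4.71,
# guinea pig ~3.90, rabbit ~2.91

human_mg_per_kg = 7.5 / scaling_factor(0.4)   # ~1.6 mg/kg for a 70-kg human
print(round(human_mg_per_kg * 70.0))          # ~112 mg/day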
Mechanism of Magnesium-Induced Inhibition of Acetylcholine Release
The thing I was getting at is that the known inhibitory effect of high levels of Mg2+ on neuromuscular acetylcholine (ACh) release could be secondary to its NMDA receptor antagonism. That inhibition of ACh release, by NMDA receptor antagonism in response to Mg2+, has been shown to occur in the striatum:
http://www.ncbi.nlm.nih.gov/pubmed/7908945?dopt=Abstract
I can't immediately find an article talking about the mechanism in relation to the potentiation of neuromuscular blocking drugs, but I'll bet it's something similar. My point is that magnesium is supposedly not a direct anticholinergic but is thought to block ACh release presynaptically, probably by decreasing calcium influx (such as by NMDA receptor antagonism or calcium channel blockade) into the motor neurons.
Note on "Recurarization" by Intravenous Magnesium
I was going to clarify something about the type of magnesium-induced neuromuscular effect that is described in this type of article:
http://www.ncbi.nlm.nih.gov/pubmed/8652332
This isn't caused by magnesium per se but by the fact that magnesium is reducing the release of ACh in the face of the already-blunted neuromuscular transmission, the effect of residual amounts of the neuromuscular blocking agent used during the operation. The magnesium potentiates the effect of the remaining amount of the nicotinic acetylcholine receptor antagonist (vecuronium, in one of those articles) (and in the reports that show actual recurarization in patients):
http://www.ncbi.nlm.nih.gov/pubmed/7734259
Regulation of 5'-Nucleotidase Activities by Zinc and Magnesium
Here's one of the articles showing that magnesium can increase 5'-nucleotidase activity. I don't yet know which 5'-nucleotidase enzyme they're talking about here:
http://www.jbc.org/cgi/content/abstract/246/9/3057
(pubmed: http://www.ncbi.nlm.nih.gov/pubmed/4324346?dopt=Abstract)
That's the type of article that doesn't allow you to tell if the effect would be significant under normal circumstances. Magnesium normally helps buffer and maintain adenine nucleotide pools, and one mechanism explaining that buffering effect is the calcium-channel-blocking effect of magnesium (the other main one is the formation of MgATP2- and other complexes with adenine nucleotides). But the effect of Mg2+ on 5'-nucleotidase activity would probably be a double-edged sword. It would help provide extracellular adenosine, and that can be cardioprotective/antiatherogenic, but excessive amounts of Mg2+ could conceivably cause that kind of purine and pyrimidine "wasting" effect. That would probably only occur at some of the really high doses of magnesium some people talk about.
In the case of zinc, though, the activation of CD73 (ecto-5'-nucleotidase) activity and cytosolic 5'-nucleotidase activity by zinc would, in concert with other derangements of nucleotide metabolism that can be produced by excessive amounts of zinc, have more clear potential to be detrimental. CD73 activity is sensitive to changes in zinc intake and increases in response to zinc supplementation, but I just don't think this would necessarily be a good thing, really. That effect can help decrease platelet aggregation in the short term, but it doesn't mean that zinc "should" be increasing CD73 activity or that the CD73 enzyme is somehow subsaturated or deficient in activity in the absence of a large excess of zinc. This article talks a bit about the way zinc is a component of both 5'-nucleotidases and cyclic nucleotide phosphodiesterase enzymes:
http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1132794
To the extent that an excess of free zinc could increase the activities of phosphodiesterase enzymes or adenosine deaminase (a zinc metalloenzyme), those wouldn't necessarily be good things (especially in the case of phosphodiesterases). This article shows that zinc can directly inhibit adenylate cyclase, and the authors implicate this and other mechanisms in the neurotoxicity of zinc in diseases such as Alzheimer's:
http://www.jbc.org/cgi/content/full/277/14/11859
(pubmed: http://www.ncbi.nlm.nih.gov/pubmed/11805091?dopt=Abstract)
That inhibition of adenylate cyclase was not due to the binding of Zn2+ or ZnATP or some other zinc-containing species to one of the two metal binding sites, given that Mg2+ did not antagonize the effects of Zn2+. Magnesium increases adenylate cyclase activity to some extent, and the authors of that article, above, note that adenylate cyclase has two (apparently catalytic or allosteric) metal binding sites that normally bind Mg2+. The presence of a catalytic binding site for a metal is different from the presence of a "metal binding function" of a protein. That study on zinc shows that adenylate cyclase can bind zinc and be inhibited by it, but that doesn't mean that adenylate cyclase is "supposed" to be bound by zinc. It's pretty clear that it's a toxic effect that can result from an excess of free zinc.
A lot of the research on the supposed nutritional effects of zinc seems strange to me. One thing I'm seeing, as I do some hasty searches on zinc and CD73, is that part of the enhancement of CD73 activity is apparently due to a zinc-induced increase in CD73 expression. That's not really a "nutritional" effect in the traditional sense, because the effect of zinc on CD73 activity doesn't really seem to emerge in a predictable or even saturable way. The CD73 enzyme has more than one zinc binding site, and I'm sure that some level of dietary zinc is required for some normal level of CD73 activity. I'm not saying zinc is not essential, because it obviously is. But an increase in CD73 activity in response to zinc does not mean that the CD73 activity, in the absence of that level of zinc availability to CD73 itself or to the zinc finger transcription factors that mediate an increase in the expression of the CD73 gene, was "deficient" before the extra zinc was provided. This article looks like a good one that might talk about some of these issues related to zinc safety:
http://www.ncbi.nlm.nih.gov/pubmed/16632171
Zinc supplementation seems questionable to me, and single servings of many "meats" contain 4 to 5 mg of zinc, etc. The case reports on neurodegeneration induced by high-dose zinc supplementation or intractable hyperzincemia are appalling, and the neurotoxic effects of an excess of free zinc are unlikely to be attributable merely to a zinc-induced decrease of copper availability to the brain (here's the posting in which I link to some of those case reports: http://hardcorephysiologyfun.blogspot.com/2008/12/zinc-toxicity-and-parp.html). I've never seen anything like that in the context of hypermagnesemia.
Hypermagnesemia is difficult to even achieve, given that the kidneys rapidly clear an excess of Mg2+ in the blood. Hypermagnesemia can cause a strange sort of almost paralytic effect, by inhibiting acetylcholine release at the neuromuscular junction, and there are reports in the literature of postoperative, intravenous infusions of magnesium inducing "recurarization" (http://www.ncbi.nlm.nih.gov/pubmed/8652332). The effect may have to do with calcium channel blockade by magnesium, I think, but it's mainly from i.v. magnesium. I've never seen a report of magnesium-induced, direct neurotoxicity, of the kind that very high-dose zinc supplementation has been shown to produce.
This animal study shows "neurotoxicity" following the direct injection of large amounts of magnesium into the spinal cord (http://www.ncbi.nlm.nih.gov/pubmed/9124662), but the abstract also shows that lower-level, inhibitory effects on neuromuscular functions, effects that occurred in response to half that dose of intrathecal magnesium, were accompanied by "warning signs" of magnesium-induced sedation and anesthetic effects (effects that could be explained by Mg2+-mediated NMDA receptor antagonism and calcium channel blockade, at least partially). The point is that similar warning signs have usually been evident in humans with hypermagnesemia, and I haven't heard of similar warning signs with Zn2+. But the main idea is that Zn2+ is known to have the potential to be neurotoxic, and even intravenous Mg2+ hasn't really been associated with those kinds of effects.
This is off the original topic, but this article shows the dynamic and somewhat nonspecific quality with which some metals/minerals can interact in the body (and calcium and magnesium are actually essential). Many metals are not essential but can still have effects on the body. That's part of my rationale for exploring all of the mechanisms involved in these topics. This article shows that magnesium activates type 5 adenylate cyclase, and calcium can antagonize this effect. The authors discuss the way magnesium can allosterically activate adenylate cyclase or participate in the binding of a substrate, ATP (as MgATP2-), to adenylate cyclase:
http://www.jbc.org/cgi/content/full/277/36/33139
(pubmed: http://www.ncbi.nlm.nih.gov/pubmed/12065575?dopt=Abstract)
Monday, December 29, 2008
5'-Nucleotidase and Interactions of Purine and Pyrimidine Metabolism
I was confused by the discussion, in another article, of this article:
http://www.pnas.org/content/94/21/11601.full
That article shows that these people had reductions in uric acid excretion, probably as a result of their elevations in 5'-nucleotidase activity, and reductions in PRPP levels. Uridine decreased 5'-nucleotidase activity. This is an interesting article, and it suggests that uridine would help to oppose, by some mechanism, the increase in 5'-nucleotidase activity that, as the authors of this article suggested (http://www.ncbi.nlm.nih.gov/pubmed/10871303?dopt=Abstract), may have occurred in response to the oral ATP. Another article, though, showed that thymidine or folinic acid, but not uridine, decreased the methotrexate-induced augmentation of 5'-ectonucleotidase activity:
http://www.jimmunol.org/cgi/content/full/167/5/2911
(pubmed: http://www.ncbi.nlm.nih.gov/pubmed/11509639?dopt=Abstract)
I can't really interpret that finding about the lack of effect of uridine without more information (I'm talking about the article above). The authors of the first article, showing the uridine-induced decrease in 5'-nucleotidase activity, suggested that the "high-Km," cytosolic 5'-nucleotidase enzyme was the one whose activity had been suppressed in response to exogenous uridine. They also noted that some of the people had gotten benefit from ribose, but the benefits from uridine had been much more significant. That's one thing I've been trying to understand, why that's the case. I understand that uridine elevates the pools of many pyrimidines in cells, and I understand that exogenous uridine, as the authors mentioned, can increase intracellular PRPP levels by sparing PRPP that would otherwise be used in orotate phosphoribosyltransferase activity. But I think there are some other mechanisms at work.
I know one could say that uridine's effect would only occur in these people in the study, and that may be true. But I think the article has broader relevance. It's a great article.
Embryonic Lethality in Isoprenylcysteine Carboxyl Methyltransferase Knockout Mice
Here's an interesting article that shows that Icmt knockout mice, mutant mice engineered to have no functional isoprenylcysteine carboxyl methyltransferase (ICMT) enzymes, die during embryonic development. One could say that this only shows the importance of ICMT during development, but, when it's viewed alongside all the other research on ICMT in relation to endothelial cells, it also suggests that ICMT activity just has pretty significant effects in lots of different cell types (during both development and adulthood):
http://www.jbc.org/cgi/content/full/276/8/5841
(pubmed: http://www.ncbi.nlm.nih.gov/pubmed/11121396?dopt=Abstract)
PP2A Methylation and APP Processing
Here's an article that talks about the role that methylated PP2A can have in dephosphorylating tau and also in reducing the cleavage of amyloid precursor protein (APP), by beta-secretase, into Abeta1-42. The authors say that the methylation of Balpha actually governs the way Balpha and the other subunits come together, and the change in the subunit composition apparently changes the substrate specificity of PP2A in a way that allows PP2A to dephosphorylate tau. I'd been thinking that Balpha was incorporated into PP2A and then methylated, but apparently Balpha is methylated before it associates with other subunits. The methylation of the Balpha subunit allows methylated Balpha to associate with other types of PP2A subunits and form heterotrimers, and the heterotrimeric, methylated-Balpha-containing form of PP2A is the one that's most active in dephosphorylating tau.
It's more complicated than I thought it was. They say that APP phosphorylation can enhance the production of soluble APPalpha (sAPPalpha) and soluble APPbeta (sAPPbeta) and be amyloidogenic under some circumstances. The authors also say that methylation of PP2A can help to dephosphorylate APP and thereby reduce the amyloidogenic APP cleavage into Abeta1-40 and Abeta1-42 (under some circumstances). It's important to remember that it's just one mechanism, but it's interesting. It looks like there's a lot of research, now, on the effects that methyltransferases can have on traditional, Alzheimer's-associated processes:
http://www.jneurosci.org/cgi/content/full/27/11/2751
(pubmed: http://www.ncbi.nlm.nih.gov/pubmed/17360897)
More on Protein Carboxymethyltransferases
The article on PP2A methylation (I referred to it in a posting yesterday) is interesting. One thing the researchers found was that the SAM-e/SAH ratio in the striatum was insensitive to folate repletion and was still decreased. This didn't increase tau phosphorylation in the striatum, though, because, as the researchers discussed, the Balpha subunit of PP2A was being expressed at a high level in the striatum. The other main point of that article is that Balpha or PP2A overexpression, more broadly, can overcome a lack of PP2A methylation (produced by something like folate depletion).
I read a little bit on protein carboxymethyltransferase enzymes, and this article shows that PCMT enzymes can be either membrane-bound or soluble (the soluble PCMT enzymes exert their activity in the cytosol). This article also shows that the protein content of some PCMT isoforms is high in the brain:
http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1135729
It sounds like the substrate-specificity may be broad and that the same enzymes can methylate D-aspartyl residues or L-isoaspartyl residues or other residues, but I'm not sure about that. Those "residues" are actually sites at which asparagine residues have become spontaneously deamidated or undergone some other reaction to produce an inversion at a chiral carbon atom and produce D-aspartyl residues. I actually haven't read about the nomenclature and chemistry, but here's an article that shows some of it: (http://www.jstage.jst.go.jp/article/bpb/28/9/1585/_pdf). That article talks about the vulnerability of alpha-crystallin proteins, in the lens of the eye, to those spontaneous changes. I don't know if PCMTs are abundant enough, in the cells of the lens, to meaningfully repair the "damaged" proteins.
Another big action of PCMT enzymes is to methylate isoprenylated carboxyl terminals of proteins. I think this is one area in which research on folic acid and homocysteine overlaps with some of the effects of statins. This article shows that adenosine and homocysteine, by their effect of increasing S-adenosylhomocysteine levels, reduced the methylation of p21ras and thereby reduced the normal, mitogen-activated, p21ras- and ERK 1/2-mediated growth of endothelial cells. It's one mechanism of homocysteine atherogenicity:
http://www.jbc.org/cgi/content/full/272/40/25380
(pubmed: http://www.ncbi.nlm.nih.gov/pubmed/9312159?dopt=Abstract)
That's actually a good type of article because it shows the metabolic effects of elevated homocysteine (decreasing the hydrolysis of S-adenosylhomocysteine, which then inhibits the PCMT and other methyltransferase enzymes), as opposed to the less-physiologically-relevant, direct, toxic effects of homocysteine. I haven't read very much of it, but there's a lot of research on the protein targets and cellular effects of isoprenylcysteine carboxymethyltransferases.
Inverse Correlation of Homocysteine With Platelet Count
I just came across this abstract, and I remember seeing it. I haven't read it yet, but it looks interesting:
http://www.ncbi.nlm.nih.gov/pubmed/16011963
The abstract says that, in this group of people the researchers studied, homocysteine levels correlated inversely with the platelet counts of the people and correlated positively with markers of the activation of endothelial cells and platelets. The abstract would imply that the platelet count would be higher upon a reduction in plasma total homocysteine, such as with a reduced folate, but would be, ideally, accompanied by decreases in platelet reactivity and endothelial cell activation/inflammation. This wouldn't necessarily happen, and reducing a disease marker doesn't necessarily fix the disease process that the markers are associated with. There's also the fact that, in a person who has various kinds of autoimmune or inflammatory or prothrombotic disease states, an increase in the platelet count or in lymphocyte proliferation, in response to something like a reduced folate, could exacerbate the condition or negate the effectiveness of a treatment.
Adenosine: Mechanisms of Cytotoxicity and Vasculoprotection
This is a really terrific article on adenosine in the context of ischemia and endothelial cell proliferation. In general, substances (growth factors, nutrients, or other compounds) that enhance endothelial cell proliferation have the potential, under the right circumstances, to be vasculoprotective and antiatherogenic, and things that enhance vascular smooth muscle cell proliferation can lead to hypertension or atherogenesis, etc. This article talks about the way adenosine enhances endothelial cell proliferation at extracellular concentrations up to slightly less than 1 mM, which is supraphysiological, and then tends to cause cytotoxic effects (endothelial cell apoptosis) at concentrations above 1 mM:
http://ajpregu.physiology.org/cgi/content/full/289/2/R283
(pubmed: http://www.ncbi.nlm.nih.gov/pubmed/16014444?dopt=Abstract)
This is relevant to the vasculoprotective effects of things like physical exercise, because exercise causes the muscles to export purines into the blood. But the graph the author includes, showing the concentration-dependence of the pro-apoptotic effects of adenosine, is really important for understanding the interactions of locally-produced adenosine with S-adenosylhomocysteine hydrolase activity. The pro-apoptotic effects of adenosine have generally been shown to be mediated by inhibition of S-adenosylhomocysteine hydrolase activity, but this effect doesn't seem to occur until the extracellular adenosine concentration is around 1 mM or higher. Even though the authors of this article use a 1 mM adenosine concentration, the finding that SAHH inhibition mediates its apoptotic effects in cultured hepatocytes is important for research on, or understanding of, the effects and safety of exogenous purines:
http://www.ncbi.nlm.nih.gov/pubmed/17097637
That's still a great article, and I was looking at that today. The implication is that, if exogenous nucleotides (particularly purines, in this context) were used for some sort of therapeutic purpose, one would want to maximize the methionine synthase-mediated decrease in the inwardly-directed transmembrane adenosine gradient, such as with a reduced folate, to minimize the potential for SAHH inhibition. I saw a reference in an article implying that adenosine has the potential to cause mechanism-based inhibition of SAHH, but I'll have to read up on that. I thought it was just allosteric inhibition.
I actually haven't looked through this article yet, but it's on the transmembrane adenosine gradient in general and looks at the ratio of intracellular to extracellular adenosine levels:
http://circ.ahajournals.org/cgi/content/full/99/15/2041
(pubmed: http://www.ncbi.nlm.nih.gov/pubmed/10209010?dopt=Abstract)
This is another one that's good:
http://cardiovascres.oxfordjournals.org/cgi/content/full/59/2/271
(pubmed: http://www.ncbi.nlm.nih.gov/pubmed/12909310?dopt=Abstract)
The Relevance of 5'-Nucleotidase Activity
My reason for looking into the 5'-nucleotidase issue is that I saw, in these two papers, that exogenous purines (http://jpet.aspetjournals.org/cgi/content/abstract/294/1/126 or http://www.ncbi.nlm.nih.gov/pubmed/10871303?dopt=Abstract) and pyrimidines (reference 28, cited in this: http://www.ncbi.nlm.nih.gov/pubmed/10354618) have both, separately, been shown or suggested to increase 5'-nucleotidase activity in those articles. It might be a factor limiting the dosages. But given that AICAR may be able to indirectly increase 5'-nucleotidase activity in methotrexate-treated cells, my thinking was that reduced folates could reduce 5'-nucleotidase activity by reducing AICAR accumulation. That effect could limit the indirect enhancement of 5'-nucleotidase activity by AICAR, thereby potentially buffering some of these apparently extreme enhancements of 5'-nucleotidase activity by exogenous nucleotides.
Sunday, December 28, 2008
S-adenosylhomocysteine Hydrolase, Intracellular Adenosine, and AICAR
This issue with methotrexate elevating extracellular adenosine (and intracellular adenosine) is really complicated, and no one has a complete handle on it. It's confusing. The main issue is that both methotrexate (an antifolate) and folate repletion, by either folic acid or reduced folates, elevate extracellular adenosine concentrations. But methotrexate produces this elevation by mechanisms that are, at least to some extent, the opposite of the mechanisms by which folate elevates the extracellular adenosine concentration. If exogenous purines and pyrimidines are ever going to be used in an effective way, in some adjunctive role, to treat traumatic brain injuries or neurodegenerative diseases (or liver disease for that matter), an understanding of these confusing aspects of purine metabolism will be helpful.
There are lots of possible mechanisms by which AICAR or phosphate sequestration or both (http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1896274) (along with the combined interactions of those factors with glucose and adenine nucleotide metabolism as a whole) could be involved in this, but I'm going to focus on AICAR and on the activities of S-adenosylhomocysteine hydrolase (SAHH) and 5'-nucleotidase. I'm thinking about the way 5'-nucleotidase activity fits into this because of some articles I was looking at. This one shows that 5'-nucleotidase activity, in the monocytes from the blood of some people taking methotrexate, was decreased in those who were showing evidence of liver toxicity from methotrexate:
http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1753961
But that effect emerged only after 6 weeks, and many more articles show that the methotrexate-induced increases in extracellular adenosine ultimately depend on ecto-5'-nucleotidase (CD73) activity (on an increase in the flux of substrates through CD73). Here's one that gets at the core of the usual explanation for the way methotrexate elevates extracellular adenosine:
http://www.jimmunol.org/cgi/content/full/167/5/2911
(pubmed: http://www.ncbi.nlm.nih.gov/pubmed/11509639?dopt=Abstract)
The usual explanation is that AICAR (ZMP) accumulates and inhibits AMP deaminase, thereby causing AMP to accumulate (by preventing its conversion into IMP). The AMP then is thought to serve as a substrate for its ecto-5'-nucleotidase-mediated conversion into adenosine, thereby elevating extracellular adenosine and producing anti-inflammatory effects by activating plasma membrane adenosine (in the above article, A2b adenosine) receptors.
But given that the inhibition of SAHH may be the predominant mechanism by which AICAR (http://cat.inist.fr/?aModele=afficheN&cpsidt=3336180) mediates the effects of methotrexate, it's possible that the inhibition of SAHH occurs, allows the inwardly-directed transmembrane adenosine gradient to intensify (as in folate depletion and Hcy elevation), and also occurs in the face of limited AICAR-mediated inhibition of AMP deaminase activity. That would allow the ecto-5'-nucleotidase-mediated conversion of AMP into extracellular adenosine, and the higher levels of extracellular adenosine could both activate adenosine receptors and contribute to a supposed methotrexate-mediated increase in the influx of adenosine. Adenosine also can inhibit SAHH by binding to allosteric site(s) on SAHH. In any case, this would explain some of the discrepancies in the research. I'll have to finish this topic and organize the rest of the papers at some other time.
Reperfusion Injury vs. Milder "Issues" With Vasodilators
The main thing a person would want to talk with his or her doctor about, in relation to the vasodilatory effects of things like reduced folates, is the presence of some kind of vascular disease or thrombogenic condition. For example, L-methylfolate can scavenge peroxynitrite at concentrations in the range of 1 uM, to a degree that was comparable to the effect of 1 uM uric acid in this study:
http://www.ncbi.nlm.nih.gov/pubmed/16940192
That effect would tend to protect against reperfusion injury or, in this case, issues. But the initial effect would be to dilate the blood vessels by increasing nitric oxide release from endothelial cells, and this could cause low-level reperfusion-induced inflammation or "issues" in a person with existing peripheral arterial disease (a generic category). Even aside from the reperfusion issues (as opposed to overt injury), an acute increase in nitric oxide levels is known to inhibit platelet function and could thereby cause bleeding in a vulnerable individual. Here's one article showing the inhibition of platelet function by nitric oxide (a fairly reliable effect, even though there tends to be tolerance to this effect):
http://www.ncbi.nlm.nih.gov/pubmed/9731013
The combination of a low-level antiplatelet effect of nitric oxide and vasodilation, by whatever mechanisms, could potentially be bad, in the short term, in some people. Even though many things that enhance nitric oxide bioavailability and produce vasodilation protect against reperfusion injury, this is not always true (especially when there's some kind of thrombogenic tendency).
This next case (linked below) is an extreme example of reperfusion injury, and these effects are not likely to occur with something like a reduced folate. I guess it's in some obscure journal, but there are countless similar case reports. In this case, a person had surgery to restore blood flow to one leg. After the surgery, the person's leg swelled up because of reperfusion-induced rhabdomyolysis (lysis of muscle cells). The person died of renal failure and, secondarily, multiple organ failure from the accumulation of myoglobin and other muscle-derived proteins (carried in the blood) in the kidneys. Myoglobinuria is a major mechanism of renal failure due to rhabdomyolysis. But the point is that sudden vasodilation can produce reperfusion injuries under some conditions (such as an existing thrombogenic condition, etc.), and the increased tissue oxygenation resulting from the increase in blood flow can cause very severe "free-radical" damage that far outweighs any short-term benefit that might come from the restoration of blood flow. In any case, it's worthwhile for a person and his or her doctor to be aware of the range of mechanisms and of the difference between short-term and long-term effects:
http://www.tsoc.org.tw/db/Jour/1/200404/17.pdf
Interspecific Scaling of Dosages for Nutrients or Physiological Substances
I was converting some of these dosages, communicated in journal articles as "mg per kg rat chow" or "mg per 100 g diet" of rats and mice, and I guess I just think there are some issues with the application of pharmacological principles to research in nutrition journals. I know that you want the animals to ingest whatever it is you're studying, given that intravenous or intraperitoneal injections produce different distributions of the substance than oral administration does. But it seems to me that dosages and concentrations could be brought into the 21st century, so to speak. Frequently, intracellular concentrations are not given in journal articles, such as for intracellular total folates, but are given as nanomoles/gram wet weight of tissue. Some researchers do conversions that can approximate intracellular concentrations in molarity (moles per liter of intracellular fluid) from those types of units, and I did those conversions for some of the articles in my folic acid paper. I didn't include them, even though researchers often have to include approximations like that in journal articles. But it took about an hour to do for one part of one paper, and it just seems to me....
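As an example of the kind of conversion I mean, here's the basic arithmetic, in Python, for turning a tissue folate value reported as nanomoles per gram wet weight into an approximate intracellular concentration. The intracellular water fraction is my own assumption (roughly 0.6 mL of intracellular water per gram of wet tissue), and this ignores the extracellular space, protein binding, compartmentation, and so on, so it's only a rough approximation.

def nmol_per_g_to_micromolar(nmol_per_g_wet, intracellular_water_ml_per_g=0.6):
    # nmol per mL of intracellular water is numerically the same as micromoles per liter (uM)
    return nmol_per_g_wet / intracellular_water_ml_per_g

# Example: a tissue total folate content of 15 nmol/g wet weight
print("approx. %.0f uM total folate" % nmol_per_g_to_micromolar(15.0))   # ~25 uM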
I don't want to complain, given that I have the entire world of scientific knowledge at my fingertips. And I'm aware of the equations for rats and mice and other animals, converting mg/kg diet into mg/kg body weight. I'm also aware that a dosage for a rat or a mouse tends to be divided by 10, given that rodents have higher metabolic rates and higher surface-area-to-volume ratios than humans and different ratios of organs to body weights. But in the folic acid paper, I didn't do the one-tenth scaling (that's the least sophisticated and most crude rule of thumb for interspecific, or between-species, scaling). This was partly because some of the articles comparing dosages between rodents and humans didn't do the scaling, and I also wanted to show that the scaling doesn't seem to work very well. Anyway, I'll try to post some of these basic "mg per kg rat chow" calculations for some of the articles. I think maybe the scaling calculations should be applied more loosely, because the scaling factor that someone obviously applied to the case of folic acid, 40 or 50 years ago, seems to be about 5 times what it should be. There are other issues with it, but I won't go into that.
This isn't really about inconvenience for me but about the ongoing issues with long-term trials in these areas. Researchers generally make choices on dosages that are totally arbitrary, etc. I would go through some of the recently-publicized "megatrials" in these areas, but there are so many issues with the trials.
Here's a link to some of the basic data for "rat chow" or "mouse chow" conversions: (http://www.who.int/entity/ipcs/food/jecfa/en/tox_guidelines.pdf). It's a document from the World Health Organization; I'm linking it because it's tedious and difficult to find references on these topics. These conversion factors, on the second-to-last page, agree approximately with the ones I applied in the "folic acid paper." But I read through parts of some articles on "allometric," between-species (interspecific) scaling of pharmacokinetic parameters, in the current literature, and this is still a big area of controversy. The problems related to the scaling of pharmacokinetic data for xenobiotics (drugs) haven't been resolved, by any means, and drugs are arguably much more neatly metabolized than nutrients like folic acid. Drugs given orally tend not to be extensively metabolized outside of the liver or kidneys, and there's a limited number of variables. The situation is more complicated, I think, for nutrients, given that physiological substances can be taken up and metabolized by most cells in the body. I'll try to make sense of some of those types of articles in relation to things like magnesium, for example. The intracellular magnesium concentrations, in red blood cells, do not correlate well with either therapeutic effects, such as fasting blood sugar or whatever other variable, or with the change in serum magnesium, etc.
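Here's roughly what I mean by the basic "mg per kg rat chow" calculations, as a Python sketch. The body weight, food intake, and Km surface-area factors are typical textbook-style values that I'm assuming for illustration (the WHO document has its own tables of factors, which come out in the same general ballpark), so this is only meant to show the arithmetic, not to reproduce any particular article's numbers.

def diet_to_dose(diet_mg_per_kg_chow, body_weight_kg, chow_intake_kg_per_day):
    # Convert a dietary concentration (mg per kg chow) into mg per kg body weight per day
    return diet_mg_per_kg_chow * chow_intake_kg_per_day / body_weight_kg

def human_equivalent_dose(animal_mg_per_kg, km_animal, km_human=37.0):
    # Crude body-surface-area ("Km") scaling of an animal mg/kg dose to a human mg/kg dose
    return animal_mg_per_kg * km_animal / km_human

# Example: 10 mg of a compound per kg chow, fed to a ~0.4 kg rat eating ~20 g of chow per day
rat_dose = diet_to_dose(10.0, body_weight_kg=0.4, chow_intake_kg_per_day=0.020)
print("rat dose: %.2f mg/kg bw/day" % rat_dose)                                  # 0.50 mg/kg bw/day

# The crudest rule of thumb just divides the rodent mg/kg dose by about 10;
# the surface-area version uses Km factors (commonly cited: rat ~6, mouse ~3, adult human ~37)
print("human-equivalent, divide-by-10 rule: %.3f mg/kg" % (rat_dose / 10.0))     # 0.050 mg/kg
print("human-equivalent, Km scaling: %.3f mg/kg" % human_equivalent_dose(rat_dose, km_animal=6.0))  # ~0.081 mg/kg
# For a 70 kg person, the Km-scaled figure would be on the order of 5-6 mg/day in this made-up example.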
Saturday, December 27, 2008
Stereochemistry of Reduced Folates; Potential for Hypersensitivity Reactions
Putting this up here will help me keep the issues with the chiral carbons in reduced folates straight. The problem with prescription leucovorin (folinic acid) used to be (and still is in the U.S.) that it was provided as racemic leucovorin (50 percent L-leucovorin, which is (6S)-folinic acid, or levoleucovorin, and is the natural, biologically active diastereomer, and 50 percent D-leucovorin, which is (6R)-folinic acid, the unnatural diastereomer). Now, in Europe, I think, L-leucovorin, the purified diastereomer, is available. But the problem with the use of an unnatural diastereomer is that it's biologically inactive or less active and could potentially cause a folate binding protein to become immunogenic. I think this hasn't ever been demonstrated, and I bet that the hypersensitivity responses to leucovorin are the result of allergenic substances in the vehicle used for intravenous injections. Actually, here's an abstract from a conference that talks about hypersensitivity reactions to oral leucovorin (http://www.nursinglibrary.org/Portal/main.aspx?pageid=4024&pid=18574). I have my doubts about those hypersensitivity reactions to oral leucovorin. This abstract (I'm posting the google scholar search, because the url of the direct link is 50 pages long: http://scholar.google.com/scholar?q=%22Two+cases+of+allergy+to+leucovorin%22&hl=en&lr=) says that the patients were able to tolerate i.v. leucovorin after taking leucovorin orally, to induce oral tolerance (I think oral tolerance is mediated by immune mechanisms in the gut-associated lymphoid tissue, such as regulatory T-cell induction). This is the type of thing I'm talking about, but this article basically "exonerates" leucovorin, at least in this specific case, by showing that this bizarre allergy was actually to folic acid (which has only one chiral carbon, on its L-glutamic acid moiety):
http://www.ncbi.nlm.nih.gov/pubmed/10932085
That article is bizarre, because folic acid has no chiral carbons on its pterin moiety. So the person would be allergic to all folic acid, including folic acid from foods. But then the authors say that folate polyglutamates in food would not have the potential to become immunogenic? I don't think all the folates in food exist as polyglutamates. I read through that article, and the person developed a skin rash, like hives. If you reliably develop hives or skin itching after taking anything, I would talk to your doctor; there tends to be some outward sign like that of this sort of idiosyncratic reaction. Let me see if I can find a reference showing that not all naturally-occurring folates are polyglutamylated. Well, I don't even need to, because the folic acid that's added to tons of different foods, as fortified folic acid, is definitely not polyglutamylated. The concern with reduced folates is not the degree of polyglutamylation but the unnatural geometry around the chiral carbon.
I think there would be some kind of sign of a hypersensitivity reaction to something like methylfolate, though, because those types of allergies, to drugs or other compounds covalently bound to plasma proteins or cellular proteins, tend to cause allergic symptoms like shortness of breath, hives, itching, etc. But, as an example, I saw this article saying that castor oil in suspensions of phylloquinone (vitamin K1), used only to treat overanticoagulation due to warfarin, turned out to be the cause of hypersensitivity reactions to the injections. But it's still worth watching for those symptoms.
But that's the rationale for using only (6S)-5-methylfolate (the "L-methylfolate" diastereomer): to avoid those issues. L-methylfolate is actually available over-the-counter now, and I don't want to mention the brand names. But that's the way things work these days: one pharmaceutical company will sell a single preparation to the industry as a whole. It's complex to separate diastereomers, and one wouldn't want a product with impurities, etc. There would probably still be advantages to getting it by prescription, and a person should always talk to his or her doctor about these things. But I saw that someone was selling racemic leucovorin over-the-counter, and that's why I mention the potential pitfalls with these things. The preparation of L-methylfolate containing the purified diastereomer apparently contains less than 1 percent of D-5-methylfolate. There's a theoretical possibility of some sort of sensitization, like that, from a tiny amount of D-5-methylfolate, but I doubt it would be an issue. If it did occur, I think there would be symptoms that would alert a person and prompt a conversation with his or her doctor.
I don't feel like talking about the fact that naturally-occurring, N10-substituted, reduced folates are designated 6R at carbon-6 (the change in designation comes from the substituent priority rules rather than from any true inversion at the chiral center, from what I can tell), and so that's another issue that makes the stereochemistry confusing here.
Noradrenergic Regulation of BDNF Release; Problems With Interpreting Data On BDNF and Neurogenesis in the Hippocampus
I was going to mention that the changes in BDNF levels in the amygdala and hippocampus (increase in the amygdala and decrease in hippocampus) in that article (http://www.ncbi.nlm.nih.gov/pubmed/18614692), in response to folate depletion in normal mice, could, when viewed alongside the increases in noradrenaline levels in the hippocampus, be the result of a chronically-increased firing rate or activation of noradrenergic neurons in the locus ceruleus. Noradrenergic activity is an important factor that regulates BDNF release in different parts of the brain. The mice in that study were showing evidence of "behavioral despair" and anxiety, and increases in noradrenaline release in the amygdala and hippocampus, in response to exaggerated increases in the firing rates of neurons in the locus ceruleus that project to the amygdala and hippocampus, would be consistent with that type of chronic stress/HPA axis hyperactivation paradigm. Here's a really interesting article showing that BDNF is especially-strongly regulated by noradrenergic activity (more specifically, beta-adrenoreceptor activation by noradrenaline or adrenaline) in response to exercise and other stimuli:
http://www.ncbi.nlm.nih.gov/pubmed/12759116
A chronic increase in the firing rates of neurons in the locus ceruleus could decrease beta-adrenoreceptor responses in one part of the brain and increase them in another (explaining the site-specific effects on BDNF release) or something like that. I think BDNF release, per se, is sort of too dynamic to interpret easily, and any drug (or stimulus like physical exercise) that increases noradrenergic transmission in the brain can transiently elevate (or otherwise influence) BDNF release or expression. Here's an article claiming that doses of caffeine, in animals, that are comparable to those consumed by humans (2-3 cups of coffee) increase BDNF levels in the hippocampus, but I wonder if those effects would persist for more than a week or two:
http://www.ncbi.nlm.nih.gov/pubmed/18620014
The increases in the proliferation of hippocampal neuronal progenitor cells ("neurogenesis") in response to exercise [as mentioned briefly in the article I cited above (http://www.ncbi.nlm.nih.gov/pubmed/12759116)] or antidepressants or other drugs tend to be not that significant or meaningful, in a lot of cases. Here's an article that gets at that controversy: (http://www.ncbi.nlm.nih.gov/pubmed/16889797). This doesn't mean neurogenesis is unimportant in the adult brain, but it just means that a lot of the proliferating neuronal progenitor cells don't differentiate into new neurons. And a lot of the studies on hippocampal volume in relation to depression or other conditions are very inconsistent, and there tends to be significant variation, between individuals, in the so-called "normal" hippocampal volume. When researchers have used MRIs to try to correlate structural brain changes with neuropsychiatric symptoms, the results have very frequently not ended up being reproducible. Also, an increase in BDNF is sort of a catch-all indicator of neuronal activity and doesn't really tell you much. Here's an interesting article that looks critically at the problems with a simplistic model for the role that BDNF release may have in psychiatric conditions: (http://www.ncbi.nlm.nih.gov/pubmed/17700574).
Increases in Iron "Content" or Deposition in Folate Depletion; Folate Depletion Impairs Neurogenesis in Adult Mice
These are interesting articles. Both articles show that folate depletion impairs iron utilization, and this isn't that surprising. Heme formation in erythrocyte precursors is coupled to DNA replication and the cell cycle (I forget the details), and folate repletion restores DNA replication and cell division. But there could be other mechanisms. One interesting thing, something that makes me wonder if that simple explanation is adequate, is that the liver iron content was doubled in response to folate deficiency in this article:
http://www.ncbi.nlm.nih.gov/pubmed/1571542
Again, that could just be that less iron is being used for red blood cell formation in folate deficiency, but the magnitude of the effect is surprising to me. I can't get the full text of this article, at this point, but the magnitude of the increase in red blood cell ferritin (this is not the same as serum ferritin but is analogous to "tissue ferritin," the type(s) of ferritin chains that store iron in the liver or astrocytes or some other tissue) in folate or B12 deficiencies is really large. It's elevated like 60- or 70-fold, and the authors compare it to hemochromatosis:
http://www.ncbi.nlm.nih.gov/pubmed/6505636
An increase in the liver iron (bound to ferritin) content or red blood cell ferritin is not, by itself, evidence of hemochromatosis (that's a genetic difference that causes inappropriate increases in the intestinal absorption of iron), but elevations in the levels of tissue ferritin are not very desirable in the long term.
I suppose this type of thing could have some relevance to restless legs syndrome (RLS), given that both folic acid and iron have sometimes been used, albeit not very effectively, to treat RLS. But I don't know what the mechanism would be. Maybe folate repletion increases the mtDNA content or has another metabolic effect and improves iron utilization in neurons in the basal ganglia, in a way that improves the RLS symptoms. But reduced folates or folic acid might also have no effect on iron utilization by cells in the brain and no consistent effects on RLS symptoms. The folic acid might have just influenced the firing rates of dopaminergic neurons by some complicated mechanism, independently of any effect on iron utilization.
This is a really recent article that's interesting and shows that folate depletion increases amygdalar brain-derived neurotrophic factor (BDNF) (an effect that the authors say is associated with increased anxiety) and somewhat selectively damages dopaminergic neurons in mice lacking a repair enzyme for misincorporated uracil in DNA:
http://www.ncbi.nlm.nih.gov/pubmed/18614692
I'm getting off-topic with this, but this is a really good article. That's sort of like amplifying the effect of folate depletion (to use folate depletion in mice lacking uracil-DNA glycosylase activity) and is similar to the type of thing you'd expect to occur in response to the combination of folate depletion and ischemia. That article is interesting, too, because it shows that folate depletion in normal mice reduces the turnover of serotonin and increases amygdalar and hippocampal noradrenaline. That (along with the increase in amygdalar BDNF levels, according to the authors) could explain some of the effects of methylfolate in psychiatric conditions, and the authors mention that. Increases in the steady-state noradrenaline levels in those parts of the brain could be sort of loosely consistent with the activation of the stress response, producing anxiety in association with folate depletion. The noradrenergic neurons in the locus ceruleus that project to those areas increase their firing rates in response to stressors and anticipatory stress. Folate depletion, in that above article, also decreased neurogenesis in one part of the hippocampus (the dentate gyrus) (http://stemcells.alphamedpress.org/cgi/content/abstract/stemcells.2008-0732v1) that retains the capacity for neurogenesis into adulthood, and that decrease in cell proliferation was accompanied by hippocampal degeneration and decrease in BDNF levels. The folate depletion also interfered with spatial learning and produced "despair-like" behaviors.
References on Methylmercury Toxicity In Relation to Jeremy Piven
This will help me get a better handle on this, to look at these again. The full texts of many journals going back to the '60s and '70s became available online through University-based access, and this makes things more convenient. I have photocopies of these somewhere, but I actually just got these online. Here's the reference for the first article:
http://pubs.acs.org/doi/abs/10.1021/ic00177a016
That's a great article, and the authors say that the complexes of the methylmercury ion (CH3-Hg+) with different sulfur- or selenium-containing ligands (a ligand is just something that binds to the CH3-Hg+ ion) last about 0.01 seconds. That's the kinetic side, the kinetic lability. But each complex (an example of a complex is CH3-Hg-SCH2CH(+NH3)CO2-, which is "methylmercury cysteinate") that lasts about 0.01 seconds is thermodynamically stable, meaning that the binding to sulfur or selenium (sulfur and selenium have very similar chemical properties) is strongly "favored." When someone talks about "stability," this usually refers, in the absence of a more specific statement, to the thermodynamic side of chemistry and is only talking about equilibrium conditions (F.A. Cotton and G. Wilkinson. Advanced Inorganic Chemistry, 5th Ed.). (I'm not saying that book is something to buy, but I have an old copy. It's a great book, and that's where I got these references on methylmercury.) A quantitative measurement of thermodynamic stability is the formation constant for an organometallic complex like methylmercury cysteinate (cysteine is an amino acid that's in proteins and in glutathione):
CH3-Hg+ (methylmercury ion) + -SCH2CH(+NH3)CO2- (cysteinate) <---> CH3-Hg-SCH2CH(+NH3)CO2- (CH3-Hg-cysteinate)
K1 = the formation constant = [CH3-Hg-cysteinate]/([CH3-Hg+][cysteinate]) = 10^14 to 10^18 M^-1 (that range combines data from the three sources)
Those really large numbers mean that, at equilibrium, almost all of the methylmercury in a water solution (like a cup of water) would, at any one time, be present as the methylmercury cysteinate complex (starting with only cysteinate and methylmercury). The problem is that the body never really reaches equilibrium, and that's why the kinetic side dominates the chemistry in the body: the rapid ligand exchange that makes each complex last only about 0.01 seconds is what allows a methylmercury ion to migrate around the brain, etc., for a long time. This is partly because, in the body, sulfur atoms (and a few selenium atoms) vastly outnumber the mercury atoms present as methylmercury.
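Here's a little Python sketch of the two sides of that, with my own assumed numbers rather than values from the papers: with a formation constant around 10^15 M^-1 and a large excess of free thiol, essentially none of the methylmercury is left uncomplexed at equilibrium, and yet a complex lifetime of about 0.01 seconds means a given CH3-Hg+ ion is still hopping between thiols thousands of times a minute.

K1 = 1e15           # assumed formation constant, in M^-1 (the articles give a range of about 10^14 to 10^18)
free_thiol = 1e-3   # assumed free thiol concentration, ~1 mM (intracellular glutathione is typically higher)

# With thiol in large excess, [CH3Hg-SR]/[CH3Hg+] = K1*[RS-], so the free fraction is 1/(1 + K1*[RS-])
free_fraction = 1.0 / (1.0 + K1 * free_thiol)
print("free CH3-Hg+ fraction at equilibrium: %.1e" % free_fraction)   # ~1e-12

# The kinetic side: if each complex lasts ~0.01 s, one methylmercury ion exchanges ligands ~100 times per second
complex_lifetime_s = 0.01
print("ligand exchanges per minute: %.0f" % (60.0 / complex_lifetime_s))   # ~6000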
Chemical kinetics is still hard for me to get my head around, but here's one example of kinetic stability in the face of thermodynamic instability (the opposite of the situation with methylmercury, even though other types of complexes, not involving methylmercury, can be both thermodynamically and kinetically stable or labile, etc.). This helps me understand it. If you pour oil into a glass of water, there will be many small droplets of oil that will coalesce into a large droplet and then into an "oil phase" (a layer) on top of the water (the equilibrium state). But when a food manufacturer makes an oil-in-water emulsion, the droplets of oil can be microscopic and suspended in the water. An emulsion is thermodynamically unstable, meaning that the fully separated oil and water phases are the favored, equilibrium state, but the separation happens very slowly, so the emulsion is not at equilibrium. In some emulsions, the separation of the oil and water might take a year. But the emulsion is kinetically stable because it might take that year for most of the droplets to "bind," or coalesce, with one another. It's the "opposite" with methylmercury. In any case, this is probably not of interest to most people, but it helps me. Here's the other article:
http://pubs.acs.org/doi/abs/10.1021/om00124a008
Here are some articles on the incident with the poisoning and death, from dimethylmercury, of a Dartmouth chemistry professor. This is a really disturbing story [dimethylmercury is much more toxic than methylmercury, but I think many of the mechanisms of toxicity are similar or the same (meaning that the difference is a matter of degree of toxicity, on a per-unit-mass basis)], but I think it's important to be aware of how toxic organic mercury compounds are. They're more toxic, on a per-unit-mass basis, than lead and just behave in an extremely insidious way. I think there's a tendency to associate poisoning with acute illness or vomiting or other overt symptoms, but that's not the way it works with organic mercury toxicity:
http://collaborations.denison.edu/naosmm/topics/dartmouth.html
http://content.nejm.org/cgi/content/full/338/23/1672
(pubmed: http://www.ncbi.nlm.nih.gov/pubmed/9614258?dopt=Abstract)
Uridine, Orotate, and Mitochondrial Damage in Liver Disease
One model of fatty liver disease, in which damage to mitochondria in hepatocytes plays a big role (a representative article: http://www.ncbi.nlm.nih.gov/pubmed/15489566), is to administer orotate (the conjugate base of orotic acid) to animals. It's not clear why orotate is so toxic to the liver or what the main mechanisms are. It's mind-bending, but I think it's become pretty clear, in all the vast "orotate-induced fatty liver" literature, that understanding orotate metabolism and the de novo pyrimidine biosynthetic pathway is likely to be important for understanding liver disease. Fatty liver disease is very common, and I forget what percentage of the population "has" it. It usually has to be diagnosed with transabdominal ultrasound and can't reliably be diagnosed with blood tests of liver enzymes.
Orotate is an intermediate in the de novo formation of uridine, from which all other pyrimidines are made. I saw one article showing that orotate directly inhibits phosphatidylcholine formation at both steps, but I can't find it now. It's interesting that exogenous uridine inhibits orotate formation by inhibiting carbamoyl-phosphate synthetase activity (via UDP, UTP, UDP-glucose and other UDP-sugars) and, via UMP, orotidine monophosphate (OMP) decarboxylase (http://www.ncbi.nlm.nih.gov/pubmed/9357323). I'm thinking that may be one way exogenous uridine can work in treating fatty liver disease caused by some drugs (http://www.ncbi.nlm.nih.gov/pubmed/17187420). It's mainly been used in people who are taking antiretrovirals for HIV and experience lactic acidosis due to liver damage or broader toxicity. It's hard to organize these references, but I know there are lots of human studies. Here's another one in animals: (http://www.ncbi.nlm.nih.gov/pubmed/18163507). The nucleoside analogue fialuridine was used in a trial to treat hepatitis B infections and caused liver failure in a striking manner, and I think it was never researched after that. The authors of this article think that fialuridine was incorporated into mtDNA and caused mitochondrial toxicity (http://content.nejm.org/cgi/content/full/333/17/1099), and exogenous uridine didn't help anyone in that trial (probably partly because the damage was too great). This type of thing is interesting with regard to mitochondrial functioning, because the same concepts can apply to disease states outside the liver. Another big mechanism whose dysregulation can play a big role in liver disease is the regulation of the directionality of S-adenosylhomocysteine hydrolase and the activities of SAM-e-dependent methyltransferases (http://www.fasebj.org/cgi/content/full/16/1/15). The relationship of orotate to nucleotide metabolism is interesting, and I'm trying to learn about how that relates to folic acid metabolism.