Saturday, January 31, 2009

Propionyl-L-Carnitine, Beta-Adrenoreceptor Density, and Adenylate Cyclase Activity

This article is interesting [Sethi et al., 2004: (http://www.ncbi.nlm.nih.gov/pubmed/15201623)]. The authors found that propionyl-L-carnitine (PLC) can enhance beta1-adrenoreceptor density in the hearts of rats with experimentally-induced heart failure and can also augment the increases in adenylate cyclase activity produced by beta-adrenoreceptor activation. The authors discussed the fact that ketones, such as acetoacetate, had been shown to produce similar effects. PLC has been researched extensively in the treatment of intermittent claudication (IC) due to peripheral arterial disease (PAD) (IC is symptomatic PAD that produces pain, usually in the legs or ankles, when a person walks or exercises). There's actually a considerable amount of research on the effects of ketones (i.e. acetoacetate and beta-hydroxybutyrate) on cognitive functioning and on neuroprotection, but some of the methods used to elevate ketone levels (or the conditions associated with elevated ketone levels) are potentially harmful. The way PLC works is not entirely clear, but it's anaplerotic (http://hardcorephysiologyfun.blogspot.com/2009/01/coenzyme-sequestration.html) and can maintain the activities of the tricarboxylic acid cycle (TCA cycle) enzymes, and thereby maintain ATP levels, particularly under conditions of metabolic stress. Biotin is required for propionyl-CoA carboxylase activity, and methylcobalamin/vitamin B12 (via conversion into 5'-deoxyadenosylcobalamin) is required for methylmalonyl-CoA mutase activity. Both of those enzyme activities are required for propionyl-L-carnitine to work properly.

Puchowicz et al. (2008) [Puchowicz et al., 2008: (http://www.nature.com/jcbfm/journal/v28/n12/abs/jcbfm200879a.html)] found that administering the ketone beta-hydroxybutyrate reduced the postischemic infarct volume (the damaged part of the brain) in an experimental model of stroke in rats. The authors suggested that the mechanism of neuroprotection by beta-hydroxybutyrate might be anaplerotic (maintaining mitochondrial ATP production, essentially, as discussed above) and might resemble the cardioprotective effects of PLC. Acetyl-L-carnitine (ALCAR) was tested as a neuroprotective agent in Alzheimer's disease, but the trials were discontinued in phase II or III (phase III, as far as I know). My opinion, in view of the research showing acetyl-group donation by ALCAR, is that ALCAR is more cholinergic than PLC [which, in view of articles such as this one by Puchowicz et al. (2008), may be more "adrenergic"], and that might have implications for conditions associated with a disturbed balance of cholinergic and adrenergic neurotransmission (in major depression, for example, there tends, very generally, to be a deficit in noradrenergic/adrenergic neurotransmission and an overactivity of cholinergic neurotransmission in the brain). There's still not as much research on the effects of PLC on the brain as on the heart, but I think there could be some potential in that area. BDNF production in some parts of the brain, in response to physical exercise, for example, is dependent on beta1-adrenoreceptor activation [Ivy et al., 2003: (http://www.ncbi.nlm.nih.gov/pubmed/12759116)]. That dependence is fairly well-established, but the extent to which that BDNF exerts any consistent effects is less clear. ALCAR, PLC, and unesterified, free L-carnitine are definitely not equivalent, and research has shown that they can have very different effects.
I think there's a tendency for research to get stuck on the idea that something is good for one condition and for nothing else. PLC has tended to be viewed as somehow "selective" for heart-related conditions, but, in my opinion, there's no real basis for thinking PLC would act selectively in the heart. Vermeulen et al. (2004) [Vermeulen et al., 2004: (http://www.psychosomaticmedicine.org/cgi/content/full/66/2/276) (http://www.ncbi.nlm.nih.gov/pubmed/15039515?dopt=Abstract)] cited research showing that carnitine and PLC (and carnitine esters in general) can cross the blood-brain barrier and enter the brain. The authors seemed to assume that PLC was acting outside the brain, but I have no idea why they would assume that. Something that decreases "physical" fatigue could just as easily be expected to act mainly in the brain, in my opinion, given that exhaustion during voluntary, physical exercise may be produced, in part, by "fatigue" in the brain itself (perhaps because of something like astrocyte glycogen depletion or an increase in the lactate-to-pyruvate ratio in the brain). Astrocyte glycogen depletion is, incidentally, one proposed explanation for the "need to sleep": scientists are still not sure why people need to sleep at all, and depletion of "brain glycogen," which is stored largely in astrocytes, may be one part of the explanation [Kong et al., 2002: (http://www.jneurosci.org/cgi/content/full/22/13/5581) (http://www.ncbi.nlm.nih.gov/pubmed/12097509?dopt=Abstract)].

I tried to find the original article on that idea, and its title is "Exercise begins and ends in the brain." It's in a European journal that evidently isn't indexed in Pubmed. Pubmed is a really strange search engine and has some...problems. It sounds like some people in exercise physiology journals have been trying to scoff at that idea in articles published subsequently, falling back on the 1940s concept of the VO2max. Here are some articles discussing the problems with that concept, a concept that reminds me, in relation to protein intake, of the idea of "nitrogen balance" [Robergs, 2001: (http://faculty.css.edu/tboone2/asep/Robergs1Col.PDF); Carlson, 1995: (http://www.chestjournal.org/content/108/3/602.full.pdf+html) (http://www.ncbi.nlm.nih.gov/pubmed/7656603?dopt=Abstract)]. I shouldn't be so critical, but research in some areas of exercise physiology is bizarre. New information from neurobiology hasn't been integrated effectively into exercise physiology research, and researchers keep testing things in elite athletes. In people with mitochondrial disorders, even an understanding of how to measure the factors that utterly crippled their capacity to exercise came extremely slowly. There's some necessity not to test things in people who are seriously ill, but many, many people who do not have isolated mtDNA mutations or inherited metabolic disorders cannot exercise at all. I think people outside the exercise physiology field have probably done a lot of the research showing that things like PLC improve exercise tolerance in people with PAD and heart disease (http://scholar.google.com/scholar?num=100&hl=en&lr=&q=propionyl-L-carnitine+exercise), and I would be willing to bet that there are articles testing it in elite athletes and finding no effect. I'm not sure why one would expect meaningful performance enhancements in finely-tuned athletes. I'd sort of worry about the whole concept of using something to enhance performance, but that's not really the idea. The idea is to build up a person's muscle mass to the point that his or her body can regulate basic functions in the fasted state, such as buffering plasma electrolyte levels and maintaining the blood sugar (by supplying amino acid and lactate precursors from the skeletal muscles for hepatic gluconeogenesis). Once the muscles of a person with PAD, for example, could do that, with guidance from his or her doctor, one might expect to see less of a need for something like PLC. Has there ever been a positive result in elite athletes? They're already in fantastic shape. I don't claim to know how the research should be done, but, in my opinion, there are some issues with research in that area. I guess that's unrelated to the original topic.

Friday, January 30, 2009

Details on Nucleotides; Bioavailability Issues With Dosage Forms of Purine

I'm just going to put this information up here, because it's the type of thing that could be difficult to figure out and sort through. I have no financial interest whatsoever in any company or brand, and anyone who wants to make use of this information should obviously check with his or her doctor. Purines and pyrimidines are the type of thing that, if one were to attempt to evaluate their use in some sort of "proof of concept" trial, would probably end up being used as adjunctive treatments. I don't know if anyone will read or want to make use of this information, but I just thought I'd put it up here. It's just the sort of basic information, but these types of details would conceivably be difficult for a person to intuit, on his or her own, without some sort of background. To the extent that this is a real "blog," it would seem that the blog should provide *some* relevant information. The statements in this posting are just my opinions, and, throughout this posting, I've repeatedly identified them as being my opinions.

Renshaw et al. (2001) [Renshaw et al., 2001: (http://ajp.psychiatryonline.org/cgi/content/full/158/12/2048) (http://www.ncbi.nlm.nih.gov/pubmed/11729024)] found evidence that purines had been depleted from the brains of some people with major depression and suggested that the antidepressant effects of S-adenosylmethionine, which has been used for 30 or more years in large numbers of trials to treat various neurological and psychiatric conditions, may be the result of its conversion into adenosine in the brain. This is probably true, in my opinion, and I discussed, here (http://hardcorephysiologyfun.blogspot.com/2008/12/purines-and-pyrimidines-in.html), some of the almost-endless lines of evidence in support of that concept, in relation to research on purines. The issue is not just relevant for psychiatric research, given that S-adenosylmethionine has been suggested as a therapeutic agent in a variety of different neurological disorders. Much of the animal research (and some of the human research) has used parenteral or intramuscular SAM-e, however, and those dosage forms tend to have high bioavailability.

That concept has broad relevance, but the main problem with oral purine formulations is that, in my opinion, enteric-coated dosage forms severely limit the bioavailability of purines. Enteric coatings are pH-sensitive coatings that are meant to let the tablet dissolve in the supposedly alkaline pH (pH greater than 7, i.e. above neutrality) of the duodenum and jejunum (the upper intestine, just past the stomach). The coatings are substances whose solubility is pH-sensitive, meaning that they would dissolve in a weak sodium bicarbonate solution, such as (ideally) the fluid of the upper intestine, but not in an acidic solution (as in the stomach, in which the pH is usually 1-2 or 3). The problem is that, in some people (if not many people), the pH never exceeds about 6 in the upper intestine. I don't have time to cite the article, but one article discusses the fact that the pH in the jejunum tends to range from roughly 4.5 to 6.2. The jejunal pH also decreases progressively throughout the day, with each meal, essentially, and may not reach its maximum again until the morning of the next day (in the "fasted" state). I'm surprised that some of the enteric-coated SAM-e preparations have any bioavailability, but that's just my opinion or impression. Even assuming the enteric coating dissolves, the counterions of some SAM-e salts have pKa values of less than -2. I don't want to go into the details, but this is part of what makes the enteric coating necessary: the counterion would be protonated in a slightly acidic solution (especially in the acidic environment of the stomach) and would be "outside" of the acid-base buffers. Something with a pKa of less than -2 is highly acidic, and, in the absence of an enteric coating, there would be the potential for acid-mediated damage to the stomach. But some of the counterions are protonated in crystalline form, as ion pairs with S-adenosylmethionine, and acidic drugs tend to dissolve slowly from tablets. There's a localized pH gradient, a sort of acidic "shell," in the diffusion layer that surrounds a dissolving tablet (I think that's the term), and the low pH near the tablet surface slows the dissolution of an acidic compound. This would be another concern, in my opinion. When one adds the additional facts that tablets in over-the-counter supplements can be rock hard and that there is no regulatory mechanism to impose dissolution standards on the supplement industry (discussed briefly here: http://hardcorephysiologyfun.blogspot.com/2009/01/copper-and-zinc-complexities-and.html), there is the potential for some real problems with bioavailability, in my opinion.
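
To make the pH argument concrete, here's a minimal sketch (in Python) of the Henderson-Hasselbalch relationship that determines how much of an enteric coating's acid groups are ionized, and therefore soluble, at a given luminal pH. The pKa of 6.0 is an assumed, generic value for a methacrylic-acid-type coating, not a value taken from any article, so treat the numbers as purely illustrative.

# Fraction of an enteric coating's carboxylic acid groups that are ionized
# (and therefore soluble) at a given luminal pH, from the
# Henderson-Hasselbalch equation. The pKa of 6.0 is an assumed, generic value.

def fraction_ionized(ph, pka=6.0):
    """Fraction of acid groups deprotonated at the given pH."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

for ph in (1.5, 4.5, 5.5, 6.2, 7.0, 7.5):
    print(f"pH {ph:.1f}: {100.0 * fraction_ionized(ph):5.1f}% ionized")

# At a jejunal pH of 4.5-6.2 (the range mentioned above), most of the acid
# groups stay protonated, so a coating designed to dissolve at pH 7 or above
# may never fully dissolve before the tablet has left the upper intestine.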

The problem is that very slow release of a purine from a dosage form tends to allow most or all of it to be converted into uric acid by intestinal epithelial cells, which prevents most of it from ever entering the portal venous blood; the liver then takes up a significant percentage of whatever purine does reach the portal vein, before it can enter the systemic circulation. To reach the brain or another target tissue, a compound has to enter the systemic circulation at some point. Bioavailability essentially means availability to tissues in the "body" other than the liver or intestinal tract, and absorption is not the same thing as bioavailability. Something can be absorbed very slowly into the portal vein and be entirely taken up by the liver before it reaches the systemic circulation. In an extreme case, something can have 100 percent absorption and zero bioavailability. This doesn't usually happen, and it doesn't happen to that extreme extent with some of the enteric-coated SAM-e preparations, given the positive results in many of the trials. But researchers have expressed concerns in the literature, on many occasions, about the bioavailability of some dosage forms. As purines go, the preponderance of evidence, in my opinion, indicates that free purine nucleotide monophosphates, and other purines that are not enteric-coated, have relatively high and "desirable" degrees of bioavailability.
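
Just to make the absorption-versus-bioavailability distinction concrete, here's a minimal sketch in Python. The decomposition (oral bioavailability as the product of the fraction absorbed and the fractions escaping gut-wall and hepatic first-pass extraction) is the standard pharmacokinetic one; the specific numbers are made up for illustration and are not measured values for any purine or SAM-e product.

# Oral bioavailability as the product of the fraction absorbed and the
# fractions escaping gut-wall and hepatic first-pass extraction.
# All numbers below are invented, purely for illustration.

def oral_bioavailability(f_absorbed, gut_extraction, hepatic_extraction):
    """Fraction of an oral dose that reaches the systemic circulation intact."""
    return f_absorbed * (1.0 - gut_extraction) * (1.0 - hepatic_extraction)

# Rapid absorption: little time for the gut wall to convert the purine to uric acid.
fast = oral_bioavailability(f_absorbed=0.9, gut_extraction=0.2, hepatic_extraction=0.3)

# Very slow release: nearly everything is degraded by the gut wall first, so even
# "complete" absorption yields almost no systemic exposure.
slow = oral_bioavailability(f_absorbed=1.0, gut_extraction=0.95, hepatic_extraction=0.3)

print(f"fast-release scenario: about {fast:.0%} bioavailable")
print(f"slow-release scenario: about {slow:.0%} bioavailable")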

Some oral adenosine triphosphate disodium formulations are now enteric-coated as well, however, and, in contrast to the situation involving some of those counterions in some SAM-e preparations, this makes no sense at all, in my opinion. Supposedly the intent, as I understand it, is to protect the ATP from presystemic or pre-absorptive dephosphorylation or from some vague degradative process in the stomach (?), but the animal studies that have used non-enteric-coated infusions into the jejunum have shown more than adequate bioavailability (1,000-fold elevations in adenosine nucleotide levels in the portal vein, post-infusion, for example, implying availability to the systemic circulation) [here's one of the animal studies: Kichenin et al., 2000: (http://jpet.aspetjournals.org/cgi/content/full/294/1/126) (http://www.ncbi.nlm.nih.gov/pubmed/10871303); (http://hardcorephysiologyfun.blogspot.com/2009/01/purines-and-orotic-acid-in-porphyrias.html)]. Acid in the stomach protonates functional groups on organic molecules; it does not produce random, degradative reactions. Even S-adenosylmethionine, which is very labile, only degrades to a meaningful degree after about 8-14 hours in aqueous solutions (and the rate is actually slowest in acidic solutions), even though there are many more possible intramolecular degradative reactions with S-adenosylmethionine than with ATP. I have those old articles, and I'll try to cite them sometime. But, in my opinion, the rate of degradation of purines by acidic, intramolecular degradative reactions, to the extent that such reactions would occur at all, would be extremely slow. Once a nucleotide is dissolved in water, either before ingestion or in the luminal fluid of the stomach, the dissolved solute can enter the upper intestine almost immediately upon ingestion (or, if it first has to dissolve in the intraluminal fluid, nearly as quickly; along any given portion of the stomach or upper intestine, that fluid may amount to as little as 10-15 mL, as discussed in articles in pharmacology journals). This entry of predissolved solutes into the upper intestine is the same way glucose or fructose from a soft drink or orange juice is absorbed into the body, such as when a person with diabetes drinks a soft drink to elevate his or her blood sugar: some of the solution enters the upper intestine almost immediately upon ingestion. Solid food moves much more slowly out of the stomach and may remain there for up to 12-14 hours (I don't have time to cite the long article I have on this), but water-soluble, dissolved solutes are subject to no such temporal barrier to entry into the duodenum and jejunum.

All or most of the ATP could be dephosphorylated by intestinal epithelial cells anyway, and this would still probably not limit the bioavailability, in my opinion. If ATP or a nucleoside monophosphate (a nucleotide is a nucleoside that has been phosphorylated and is a monophosphate, diphosphate, or triphosphate) were dephosphorylated before it had either diffused, by paracellular diffusion, between jejunal epithelial cells or entered the cytosol of a jejunal epithelial cell, this "early dephosphorylation" could, in fact, limit its solubility, in my opinion. But I think the rate of absorption would be so rapid as to preclude that effect. I don't have time to link to the many websites that provide solubility data for nucleosides, but adenosine and guanosine, for example, are much less soluble than adenosine monophosphate disodium and guanosine monophosphate disodium. Again, I don't see how the dephosphorylation could occur before the rapid absorption of a nucleotide taken in a capsule or in solution. There are alkaline phosphatase enzymes that, I think, may be able to dephosphorylate triphosphorylated nucleosides (such as ATP) prior to carrier-mediated transport or paracellular diffusion, but I think those "digestive, enzymatic dephosphorylation" reactions are slow in comparison to the rate of passive, intercellular diffusion of small molecules across the intestinal barrier. People tend to overestimate the "strictness" of the intestinal absorption barrier, as I've discussed in recent postings. I've seen many articles refer to the fact that nucleotides probably are absorbed en masse by paracellular, passive diffusion, and the fact that the nucleotides carry a net charge, on the phosphate groups, is, in my opinion, of little consequence or concern.

A separate issue is the rate of degradation of guanosine to guanine and xanthine and uric acid (urate), by guanine deaminase and xanthine oxidoreductase/xanthine oxidase, and of adenosine to inosine, xanthine, and urate in either the intestinal tract or elsewhere. The authors of one article I have cite evidence that xanthine oxidase activity is lower in the fasting state (meaning before breakfast, not between meals), and that might suggest that some small bioavailability enhancement could occur in the fasting state. Incidentally, the authors of several articles refer to guanosine as being, perhaps, more absorbable than other free nucleosides, and it's interesting that its plasma concentration is normally extremely low. The CSF concentration is something like 500 nM, I think, though, in rodents, and guanosine nucleotides are exported from the liver or kidneys following fructose loads, etc. Guanase (guanine deaminase) is widely distributed throughout the body. The reason bioavailability is so important with purines is that purines are metabolized extremely rapidly by every cell in the body, including red blood cells.

There are many articles showing that oral guanosine, dissolved in the drinking water of rats, is bioavailable enough to enter the brain and produce neuroprotection against otherwise-lethal experimental treatments with neurotoxins (glutamatergic drugs, such as kainate and quinolinic acid, that exert convulsant effects, etc.) (see here for some of the references: http://hardcorephysiologyfun.blogspot.com/2009/01/anticonvulsant-effects-of-oral.html). The researchers ultimately had to switch to using guanosine monophosphate disodium, which is much more soluble than guanosine, but those and other articles suggest to me that free adenosine and guanosine nucleotides have significant bioavailability. I don't like to go into this kind of product-level detail, but it's potentially a complicated issue for people, with a large number of pitfalls. For example, nucleotides that are not in free form, such as those in RNA or DNA chains, would not be expected to be bioavailable, in my opinion; the rate at which free nucleotides would be released from the chains would, in my opinion, be too slow to outpace the rapid degradation of the nucleotides into uric acid in the intestinal tract. I'd suggest that anyone who reads this consult with his or her doctor before using any products, obviously. I have no idea what the percentages of the individual nucleotides are in mixed products, but in an equimolar mixture the molar masses would be expected to make the percentages, by weight, "weigh" slightly in favor of the purines [meaning that slightly more than half of the mixture, by mass, would be purines (guanosine monophosphate + adenosine monophosphate); the rough arithmetic is in the sketch below]. Inosine is fairly widely available, as far as I know. Anyone using purines should really have his or her uric acid levels monitored periodically, even though the uric acid production from adenosine tends to be lower than the production from inosine and guanosine. I have an article that compares the different purines in those terms, but that's not the only way to compare them. An over-the-counter source of uridine has been used in some clinical trials in people with HIV-associated lipoatrophy/lipodystrophy.
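
Here's the rough arithmetic behind the statement that an equimolar mixture of the four nucleoside 5'-monophosphates would be slightly purine-heavy by weight. The molar masses are approximate free-acid values; disodium salts or hydrates, which are what actual products tend to contain, would shift the percentages a bit.

# Mass fraction of purine nucleotides (AMP + GMP) in an equimolar mixture of
# the four nucleoside 5'-monophosphates. Molar masses are approximate
# free-acid values (g/mol); salts or hydrates would shift these slightly.

molar_mass = {
    "AMP": 347.2,  # adenosine 5'-monophosphate (purine)
    "GMP": 363.2,  # guanosine 5'-monophosphate (purine)
    "CMP": 323.2,  # cytidine 5'-monophosphate (pyrimidine)
    "UMP": 324.2,  # uridine 5'-monophosphate (pyrimidine)
}

purine_mass = molar_mass["AMP"] + molar_mass["GMP"]
total_mass = sum(molar_mass.values())

print(f"purines: {100.0 * purine_mass / total_mass:.1f}% of the total mass")
# Roughly 52%, so the mixture is only slightly purine-heavy by weight.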

I don't know if anyone is reading this, but I'm just trying to provide this information and am not trying to make any other types of assessments, etc. There are many articles on the use of nucleotides in animal research (http://scholar.google.com/scholar?q=dietary+nucleotides&hl=en&lr=), but it's important to note that, in my view, the amounts of nucleotides and nucleosides that exist in foods and that are likely to be able to reach the brain, after one ingests the foods, are likely to be very small. I talked about it in the link to my past posting, below, but there's, arguably, a limit to the dosage of purines that a human can reasonably consume, given the production of uric acid from the purines.

Thursday, January 29, 2009

Clues to Mechanisms of Excessive-Pyridoxine-Induced Peripheral Neuropathy

Keep in mind that I haven't looked at these articles, and I'm not suggesting that anyone would want to try to protect against peripheral neuropathy due to excessive pyridoxine dosages. But understanding the mechanisms helps one understand vitamin B6 metabolism. The peripheral neuropathy isn't just going to suddenly appear at 201 mg/d and be absent at 200 mg/d. The same mechanisms are probably at work at the lower doses, but the same mechanism (activation of some enzyme) could produce a protective effect at one dose of pyridoxine and toxicity at another, for example, depending on any number of factors. Or the "toxic mechanism" could just work at a low level at lower doses. Here's an article showing that glutamate at 0.5 g/kg bw/d (by intraperitoneal infusion) protected against peripheral neuropathy caused by pyridoxine (vitamin B6) [Arkaravichien et al., 2003: (http://www.ncbi.nlm.nih.gov/pubmed/12909271)]. Arkaravichien et al. (2003) also found that 1 mg/kg bw/d, i.p., worsened the neuropathy. Kaneda et al. (1997) [Kaneda et al., 1997: (http://www.ncbi.nlm.nih.gov/pubmed/9098696)], "in contrast," found that pyridoxine protected against glutamate neurotoxicity in cultured neurons. That's not a fair comparison, for various reasons. But pyridoxine has crazy and complex effects. One article I read recently discussed the fact that coenzymated pyridoxine is a cofactor for over 100 different enzymes.

Mechanisms of Neuroprotection by Vitamin B6: Potential Involvement of the Malate-Aspartate Shuttle

This article [Geng et al., 1995: (http://www.ncbi.nlm.nih.gov/pubmed/8848291)] is interesting, and the authors found that the neuroprotective effects of pyridoxine (vitamin B6) in cultured neurons were evidently dependent, in part, on PLP (pyridoxal 5'-phosphate, or coenzymated pyridoxine)-mediated increases in the activities of one or more transaminase enzymes. Geng et al. (1995) found that the pyridoxine-mediated neuroprotection could be blocked by the GABA-A receptor antagonist picrotoxin or by ifenprodil, an antagonist at the polyamine binding site on the NR2B subunit of the NMDA receptor (more specifically, an antagonist of polyamine-sensitive epsilon2 NMDA receptors, containing NR1A and NR2B subunits) [Gallagher et al., 1996: (http://www.jbc.org/cgi/content/full/271/16/9603) (http://www.ncbi.nlm.nih.gov/pubmed/8621635?dopt=Abstract)]. Polyamines can either inhibit or activate NMDA receptors, and the effects of polyamines (such as spermidine, spermine, putrescine, N-acetylspermidine, etc.) depend on the extracellular (synaptic) concentration of glycine (Gallagher et al., 1996). Pyridoxine could increase glycine availability by increasing the activities of the cytosolic or mitochondrial serine hydroxymethyltransferase (cSHMT or mtSHMT) enzymes in neurons, and PLP is also a cofactor for ornithine decarboxylase [Hillary et al., 2003: (http://www.ncbi.nlm.nih.gov/pubmed/12686127)], one of the rate-limiting enzymes required for polyamine biosynthesis, so the pyridoxine probably also stimulated polyamine biosynthesis. Geng et al. (1995) speculated that the GABAergic component of the neuroprotection had occurred through a PLP-induced increase in glutamic acid decarboxylase activity.

The involvement of aminotransferase enzymes in the neuroprotection is really interesting, and Geng et al. (1995) suggested that the additional, exogenous pyridoxine had activated PLP-dependent aminotransferase enzymes that normally maintain the malate-aspartate shuttle. The malate-aspartate shuttle normally maintains the activities of the mitochondrial tricarboxylic acid cycle enzymes, and inhibition of the malate-aspartate shuttle per se inhibits mitochondrial ATP production (http://hardcorephysiologyfun.blogspot.com/2009/01/cytosolic-redox-state-and-tca-cycle.html). That's a really interesting mechanism, and the effects of pyridoxine on mitochondrial activity tend to get overlooked. That neglect is strange, because there's evidence that the mitochondrial damage to the liver in pyridoxine deficiency, which manifests as fatty liver disease [Inubushi et al., 2005: (http://www.ncbi.nlm.nih.gov/pubmed/16179747)], is potentially more severe than the damage produced by depletion of other cofactors (pyridoxine depletion has been said to produce "florid cirrhosis," preceded by the mitochondrial injury that characterizes fatty liver disease) [Lumeng et al., 1974: (http://www.ncbi.nlm.nih.gov/pubmed/4359937)]. Inhibition of the malate-aspartate shuttle, in the absence of an adequate pool of PLP, could be one mechanism that would help explain the mitochondrial damage that occurs in pyridoxine deficiency. Deficiencies of almost all of the B vitamins have been shown to produce fatty liver in animals and, directly or indirectly, in humans as well. The lack of awareness of the effects of pyridoxine on mitochondrial activity is probably a result, in part, of the capacity of pyridoxine to produce peripheral neuropathy at doses larger than 150-200 mg/d in the long term. There must be some mechanism for that neuropathy, but no one knows what it is.

Wednesday, January 28, 2009

Nitric Oxide as a Xanthine Oxidase Inhibitor

This article [Maxwell et al., 2001: (http://content.onlinejacc.org/cgi/content/full/38/7/1850) (http://www.ncbi.nlm.nih.gov/pubmed/11738284?dopt=Abstract)] shows that an acute increase in nitric oxide production can decrease serum uric acid levels. In that article, the researchers used L-arginine to increase nitric oxide production; L-arginine is a substrate for both the endothelial nitric oxide synthase (eNOS) enzyme and the inducible nitric oxide synthase (iNOS) enzyme, which is expressed in endothelial cells and numerous other cell types. This type of effect is assumed to be desirable in the context of hyperuricemia, and it may be desirable for some people. One message of the article is that the traditional explanation of hyperuricemia [that it results from the underexcretion of urate by the kidneys] may underestimate the extent to which ischemia, in which a reduction in nitric oxide availability tends to occur, contributes to hyperuricemia (by disinhibiting xanthine oxidase activity). Nitric oxide inhibits xanthine oxidase activity, and this is part of the background for the use of inosine in multiple sclerosis. People with multiple sclerosis tend to have elevated blood levels of NOx (nitrate + nitrite, etc.), higher uric acid levels are associated with lower NOx levels in the blood, and inosine is meant to decrease iNOS-derived nitric oxide by elevating serum uric acid levels. This effect of nitric oxide on xanthine oxidase would also help explain the inverse relationship between NOx and uric acid levels in people with multiple sclerosis (the elevated nitric oxide could be inhibiting xanthine oxidase and thereby lowering uric acid production). Excessively-high nitric oxide levels are almost guaranteed to increase peroxynitrite production, and uric acid is arguably the primary peroxynitrite scavenger in humans. But high nitric oxide levels also inhibit mitochondrial function and could interfere with de novo purine biosynthesis in endothelial cells and, at least in the long term, reduce uric acid levels by that type of mechanism. There are some articles showing that serum ferritin is positively correlated with serum uric acid levels, but that association is pretty clearly a bad thing. I remember that the authors suggested that the high serum ferritin levels were producing a toxic inhibition of nitric oxide production and thereby disinhibiting xanthine oxidase activity (and increasing uric acid levels). But that's an extreme example, and inosine elevates uric acid by a completely different set of mechanisms. No one is suggesting that elevating serum ferritin levels would be a good way to elevate uric acid production.

Strict Standards for Evaluating Vitamin B12 Status?

The author of this article [Solomon, 2005: (http://bloodjournal.hematologylibrary.org/cgi/reprint/105/3/978.pdf) (http://www.ncbi.nlm.nih.gov/pubmed/15466926)] argues that rigid standards for vitamin B12 (cobalamin) deficiency should not be applied to blood tests, given that many people have "normal" serum cobalamin or methylmalonic acid levels and nonetheless have severe neuropathy, for example, or other conditions that respond to cobalamin repletion. The author also discusses the advantages of methylcobalamin over cyanocobalamin, although the major advantage is simply that methylcobalamin is "not cyanocobalamin." Either of the two commonly used forms of vitamin B12 that are not cyanocobalamin, namely hydroxocobalamin and methylcobalamin, is better than cyanocobalamin, in terms of transport into cells or mitochondria, in terms of the absence of an added cyanide burden from the cyanide moiety on cyanocobalamin, etc.

But the main point of that article is that a normal serum cobalamin level will not necessarily guarantee that the functional effects of cobalamin are "normal" in different tissues, and the use of "rigid" standards for deficiency vs. sufficiency would essentially mean that large numbers of people would go untreated for conditions, such as severe peripheral neuropathy, that would otherwise be ameliorated by cobalamin repletion. I collected some more dose-response "data" on the serum B12 responses to different doses of methylcobalamin, in those articles discussing the use of methylcobalamin to treat sleep disorders, and the serum B12 levels in response to dosages of methylcobalamin of 3 mg/d are really low and quite variable between individuals. I'll post the values from the articles, but I want to try to collect data from lots of different articles. It's really hard to come by that type of information, because the dosages are chosen arbitrarily. Many researchers refer to 5-mg dosages of oral forms of vitamin B12 as "massive," and those types of dosages may not even elevate a person's serum B12 much above the normal range. For example, one person might get a five-fold increase in serum B12 in response to 3 mg/d, and another person might just elevate his or her level to the upper limit of the normal range. I think there's a lot of room for different dosages, but using higher dosages of forms of vitamin B12 tends to not do much in the absence of concomitant supplementation with adequate dosages of L-methylfolate or L-leucovorin or folic acid. A certain percentage of a dose is absorbed by an intrinsic-factor-independent process (this is well-known now), probably by passive diffusion (paracellular diffusion) in the upper intestine. But there's probably interindividual variation in the extent to which this occurs, and the percentage that's absorbed may not remain constant over a range of different dosages. One really should, ideally, get serum B12 tests and work with one's doctor to adjust the dosage, etc.

But the serum B12 levels that people get from injectable forms of vitamin B12, from their doctors, tend to be much higher than most of these levels that one sees, in the literature, even from 3-6 mg/d of methylcobalamin. The normal range of serum B12 values does not really reflect anything except the types of values one sees in a population that gets vitamin B12 mainly from food. One can only get a certain amount of vitamin B12 from food, but that doesn't mean that the normal range defines a range of serum B12 values that are "physiological" or physiologically normal. The most common error in reasoning I see in journal articles is the idea that population norms reflect physiological norms, and it's just not the case. I've gone through the estimates of the intracellular cobalamin levels in relation to the Km's for the binding of cobalamin-derived coenzymes to their respective apoenzymes, and those data argue against the idea that the normal range of serum B12 levels represents some vague concept of "physiological" saturation or who-knows-what.

Monday, January 26, 2009

Parathyroid Hormone, Magnesium, Vitamin D, and Mast Cell Degranulation

There's also this interesting effect of parathyroid hormone (PTH) on calcium influx into mast cells. PTH can be elevated in kidney disease and can cause pathological itching and histamine release from mast cells in the skin [Szepietowski, 1998: (http://www.ncbi.nlm.nih.gov/pubmed/9585892)]. That article mentions the fact that doctors sometimes perform parathyroidectomies to treat the pruritus, and PTH is known to produce excessive calcium influx in other cell types. I know there are some articles suggesting that some of the effects of vitamin D on insulin metabolism might be due to the suppression of PTH levels and the consequent suppression of excessive calcium influx into adipocytes or other cell types [McCarty and Thomas, 2003: (http://www.direct-ms.org/pdf/VitDNonAuto/MccartyPTHObesity.pdf) (http://www.ncbi.nlm.nih.gov/pubmed/14592784)]. PTH levels decrease as the 25(OH)D level, an indicator of vitamin D intake, increases. An adequate vitamin D intake also increases magnesium absorption, and magnesium can have complex effects on PTH release and on the responsiveness of cells to PTH. I think the general idea is that adequate magnesium status reduces "PTH resistance," meaning that magnesium tends to increase the responsiveness of cells to PTH while also improving the feedback regulation of PTH release. That could be another mechanism by which magnesium regulates mast cell histamine release (via "improved" regulation of PTH release or responsiveness). Moderate vitamin D intakes also increase calcium absorption, and increases in serum calcium could increase mast cell histamine release. That's part of the idea behind my suggestion that taking more than a reasonable amount of calcium (more than roughly 1,000 mg/d, from food and supplements combined), in combination with vitamin D, could produce undesirable effects, particularly in the context of the chronically-low magnesium intakes of many people (given the removal of most magnesium from grains during processing, etc.).

Magnesium, Mast Cells, Histamine, and Adrenergic "Mast Cell Stabilization"

This article shows that magnesium depletion acutely and dramatically increases histamine release from mast cells [Kraeuter and Schwartz, 1980: (http://jn.nutrition.org/cgi/reprint/110/5/851) (http://www.ncbi.nlm.nih.gov/pubmed/6445415?dopt=Abstract)]. Although those authors found that the histamine levels decreased subsequently, the increased numbers of mucosal or submucosal mast cells persisted. That in itself is not desirable, given that mast cells release many pro-inflammatory substances other than histamine. Those substances include all sorts of cytokines, such as tumor necrosis factor-alpha, and proteases that tend to degrade the extracellular matrix and lead to secondary inflammatory processes, etc. Mast cells are present at many sites in the body and contribute to many disease processes.

That article isn't the best example, and the sheer number of articles on magnesium complicates the task of organizing them. The overall idea that's been shown in many articles is that magnesium decreases histamine release from mast cells by essentially acting as a calcium channel antagonist, given that calcium influx into mast cells causes the storage granules containing histamine (and other protein mediators, such as cytokines and proteases) to undergo exocytosis (degranulation). Magnesium decreases and buffers calcium influx, and this tends to decrease histamine release. Mast cells can also undergo piecemeal degranulation and release small amounts of mediators slowly, and the increases in the numbers of mast cells observed in chronic magnesium deficiency (Kraeuter and Schwartz, 1980) could reasonably be expected to increase piecemeal degranulation. Mast cells have been heavily implicated in atherosclerotic progression, and magnesium has strong effects on the dynamics of histamine release from mast cells. Histamine release also depletes intracellular magnesium from red blood cells (RBC) (I have the article on my computer but am not up for organizing references right now), and the effect is dramatic enough to be detectable in humans in vivo (upon the extraction of RBC).

Another layer to this is the capacity of adrenergic activity (catecholamine release and catecholaminergic transmission, in the brain and peripherally) to deplete intracellular magnesium from various cell types while simultaneously, via beta2-adrenoreceptor activation in particular, "stabilizing" mast cells (i.e. decreasing histamine release and stabilizing the "loose-cannon" aspect of mast cell degranulation). Beta2-adrenoreceptor agonists, used in asthma, are known to produce this effect, and a similar effect can occur in response to something like pseudoephedrine, taken during a cold or for allergies, etc. But catecholaminergic activity, the kind of signaling that beta2-agonists mimic (they bind to receptors for noradrenaline/adrenaline), depletes magnesium from numerous cell types. This tends to decrease magnesium availability to mast cells, and the catecholamine-mediated depletion of magnesium may limit the "mast-cell-stabilizing" effects of adrenergic activity in the long term. Exercise increases catecholaminergic transmission, and high-intensity exercise, such as resistance exercise (weights), can increase it drastically. That's one mechanism by which exercise can produce intracellular magnesium depletion, particularly in the absence of some attempt to compensate for the effect. Magnesium also tends to directly decrease catecholaminergic activity, or to buffer catecholamine release, and this tends to be desirable in the context of something like exercise. One reason has to do with the fact that massive catecholamine release from sympathetic nerve terminals, such as during intense exercise, can derange the autoreceptor-mediated regulation of the firing rates of adrenergic neurons. That's an imprecise explanation, but magnesium can also enhance adenylate cyclase activity somewhat and influence adrenergic activity by that mechanism, or by preserving intracellular adenine nucleotide pools, etc. One of the direct mechanisms by which magnesium decreases catecholamine release is that calcium influx into nerve terminals is an important trigger for neurotransmitter release, and magnesium limits that calcium influx. On the other hand, the presence of an adequate pool of magnesium can actually potentiate catecholaminergic transmission in the context of the long-term "hypersympathetic" bias that tends to develop as people age, or it can influence the extents to which different receptor subtypes are activated and thereby "improve" or normalize adrenergic transmission. That's relatively imprecise, but a sustained and persistent increase in adrenergic activity tends to be detrimental to the regulation of adrenocorticotropic hormone (ACTH) release and to the feedback inhibition of corticotropin-releasing hormone (CRH) release from different cell groups in the brain, etc.

Note on Inulin

Here's an article that gets at the way inulin behaves in the body. The authors are saying that in adult animals, inulin, once in the blood, slowly crosses the blood-brain barrier (BBB), from the blood to the brain, but that the rate of clearance of inulin from the CSF is more rapid than the rate of uptake into the brain. This creates a concentration gradient across the blood-brain barrier and prevents an equilibrium from being established. The BBB is not totally impermeable to solutes, and noradrenergic activity, from physical exercise, or any number of other factors can temporarily permeabilize the BBB. It's worth noting that inulin cannot be degraded in the body and would have to be excreted by either the biliary (liver) or renal routes [Ferguson and Woodbury, 1969: (http://www.springerlink.com/content/m3511570224q6828/)]. I personally think it's disturbing that that's being sold.

Disturbing Articles on Inulin

I saw on t.v. that inulin is still being used in some dietary "fiber" supplements. This article [Ma et al., 1991: (http://www.ncbi.nlm.nih.gov/pubmed/2035637)] shows that inulin actually can be absorbed through the intestinal tract intact, and that's really disturbing. Inulin is used in animal research to evaluate kidney function, among other applications, and it distributes to the extracellular fluid, influencing the intravascular and extravascular fluid balance, apparently through some kind of bizarre interaction with albumin. I think the effect is similar to the effect that a disturbance in the serum albumin-to-globulin ratio can have on the extracellular fluid volume, as in ascites in liver disease (as in a "positive albumin gradient"). I've seen inulin used for all sorts of applications like that in animal research, for "labeling" the extravascular compartment, etc. To the extent that inulin could be absorbed, it could potentially cause hyperosmotic stress and disturb endothelial cell barrier functions in the blood vessels. I don't want to think about what could conceivably happen at the blood-brain barrier or blood-CSF barrier.

Inulin has a molecular weight of ~5,500 Da, and the traditional rule is that substances with molecular weights that high are not absorbed into the blood intact and cannot cross the intestinal barrier. But the authors discuss the compact quality of the inulin molecule and suggest that it can simply be absorbed by the paracellular pathway, by diffusing between the epithelial cells. Consistent with its potential for creating osmotic disturbances, Ten Bruggencate et al. (2006) [Ten Bruggencate, 2006: (http://jn.nutrition.org/cgi/content/full/136/1/70) (http://www.ncbi.nlm.nih.gov/pubmed/16365061?dopt=Abstract)] found that inulin disrupted intestinal barrier function in humans. That first article I cited (Ma et al., 1991), in particular, is very disturbing to see, given that some products contain inulin. That molecular weight, 5,500 Da, is much lower than the molecular weights of most other so-called dietary fibers or cellulose-based grains, etc., and the low molecular weight, as discussed by Ma et al. (1991), is a significant determinant of the capacity of a non-degradable substance to be absorbed. Gay-Crosier et al. (2000) [Gay-Crosier et al., 2000: (http://www.ncbi.nlm.nih.gov/pubmed/10798950)] found that inulin caused anaphylactic reactions, severe allergic reactions, in some people. Sounds fantastic.

Articles on Pantothenic Acid (Vitamin B5)

This article [Bean et al., 1955: (http://www.pubmedcentral.nih.gov/articlerender.fcgi?rendertype=abstract&artid=438858) (http://www.ncbi.nlm.nih.gov/pubmed/14392222)] is really interesting. It's old, but the authors induced pantothenic acid (PA, vitamin B5) deficiency in several people and found that the people, who had displayed sunny dispositions, soon "became quarrelsome, sullen, and petulant," felt sleepy and ended up spending a lot of time in bed, and developed peripheral neuropathy, low plasma cholesterol, and abnormalities of adrenal steroid biosynthesis. Kuo et al. (2007) induced PA deficiency in mice and found that the animals developed remarkably severe neurological symptoms and other abnormalities. The dystonia that the mice developed is consistent with the types of motor abnormalities that one would expect to result from damage to or iron deposition in the globus pallidus or basal ganglia in general, and Kuo et al. (2007) note that iron deposition in the globus pallidus, a group of neurons in the basal ganglia, occurs in humans with Hallervorden-Spatz syndrome (HSS or pantothenate kinase-associated neurodegeneration). HSS is a genetic disorder that reduces coenzyme A (CoA) levels by reducing the activity of pantothenate kinase, the first enzyme in CoA biosynthesis. Smith and Song (1996), cited below, discuss all of the neurological symptoms and hematological abnormalities that occur in animals depleted of PA. Researchers used to say that PA repletion produces more "cholinergic" effects than anything else, given that a major CoA-containing species is acetyl-CoA and that acetylcholine biosynthesis is sensitive to acetyl-CoA availability. But the article by Bean et al. (1955), showing somnolence and irritability, suggests, in conjunction with the dystonia observed in animals and the involvement of the basal ganglia and globus pallidus, more of a dopaminergic effect than a cholinergic effect. Those are somewhat vague and general statements that I'm making, but that's my subjective sense of the effects of PA on the brain (that its effects are probably as much catecholaminergic as cholinergic).

Even though I generally think the use of nucleotides and other approaches produces more potent biological effects than other nutrients, such as PA, it's important to realize that there's really no way to evaluate a person's PA status [Smith and Song, 1996: (http://cat.inist.fr/?aModele=afficheN&cpsidt=3155583)]. It's easy to dismiss articles that use large doses of specific vitamins or nutrients and find therapeutic effects, and I don't claim to know all of the mechanisms that could account for the effects of apparently supraphysiological dosages of various cofactors or cofactor-precursors, such as PA. But the dosages are apparently not supraphysiological in their effects, because the effects can be accounted for in terms of the known actions of the cofactor(s). Malabsorption can produce very dramatic decreases in the absorption of some nutrients, and, given the high prevalence of fatty liver disease in the population at large, for example, it is not unreasonable to expect that malabsorption would affect the absorption or enterohepatic recycling of nutrients in a certain number of people. Heubi et al. (1997) [Heubi et al., 1997: (http://www.ncbi.nlm.nih.gov/pubmed/9285381)] found that, in children with cholestatic liver disease, massive dosages of magnesium were required, for many months, to overcome the intraluminal, unabsorbed free-fatty-acid-mediated (or triacylglycerol-mediated) or bile acid-mediated decreases in the intestinal absorption of magnesium. The mean dosage required was 11 mg/kg, which is 770 mg/d for a 70-kg adult, but some individual children required up to 34 mg/kg/d. That would be 2,380 mg/d for a 70-kg adult. Unabsorbed fatty acids and similar compounds bind to magnesium by ionic interactions, and the effect on magnesium absorption can evidently be really significant. Most people wouldn't need to take that much magnesium under any circumstances, and there's some potential for hypermagnesemia at doses higher than 2,000-3,000 mg/d, mainly but not exclusively in people with kidney disease. But the point is that problems like that can be severe. I was really surprised at the magnitude of that deficit in absorption, and many months were required to correct the overt deficiency states that those children were experiencing. When I used to see articles that mentioned malabsorption, I thought that explanation sounded far-fetched or vague. But it's not necessarily vague at all, and the effects can be significant.
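
For anyone who wants to check the body-weight scaling used above, here's a trivial sketch. The 70-kg body weight is just the usual reference-adult convention, not a value from Heubi et al. (1997); the mg/kg/d dosages are the ones quoted above.

# Scaling the per-kilogram magnesium dosages quoted above to a 70-kg adult.

def daily_dose_mg(dose_mg_per_kg, body_weight_kg=70.0):
    """Total daily dose (mg/d) for a given mg/kg/d dosage and body weight."""
    return dose_mg_per_kg * body_weight_kg

print(f"mean dosage, 11 mg/kg/d: {daily_dose_mg(11):.0f} mg/d")     # 770 mg/d
print(f"highest dosage, 34 mg/kg/d: {daily_dose_mg(34):.0f} mg/d")  # 2,380 mg/d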

I've just seen that most of the people who dismiss the idea that "expected" physiological responses to cofactors can occur at high doses seem to not have a good understanding of the metabolic functions of the cofactors or the manifestations produced by the depletion of the cofactor(s). When I forget about some of these articles, I tend to develop a dismissive attitude toward things like PA, too. But, for example, one way of explaining the effects of large doses is to recognize that reversing a deficiency may, in the short term, produce pronounced responses to large doses. Bean et al. (1955) used 4,000 mg/d for six days, to replenish the PA stores (the CoA pools), and then used 2,000 mg/d for the subsequent 20 days. Presumably, lower doses could have been used subsequently. One could use a similar explanation to account for the supposed effects of 10,000 mg/d of PA in this article [Leung, 1995: (http://www.ncbi.nlm.nih.gov/pubmed/7476595)].

In one article I have on my computer, the author of a hastily-researched, superficial review article on PA states, early in the article, that PA deficiency is "rare," but the case may simply be that no one is looking for evidence of PA depletion in anyone. There's no reference to support that statement, and, even setting aside the lack of supporting research, how would anyone know the extent to which PA deficiency is either common or rare? No one screens for PA status, and there's no reliable test to evaluate PA status. I'm just saying that it's easy to say that something is rare or that there's no good evidence for such-and-such, but those types of statements aren't necessarily true. CoA participates in vast numbers of enzymatic reactions, and, incidentally, that's part of the reason for the difficulty in evaluating PA status.

Also, there can be a tendency to group all nutrients together and make arbitrary dosage choices. For example, there are reasons to think that the dosages of vitamin B2 [given its potential for exacerbating redox cycling reactions and for enhancing clotting factor biosynthesis, by accelerating the cytosolic, "warfarin-insensitive" reductase enzymes that participate in the vitamin K cycle (http://hardcorephysiologyfun.blogspot.com/2009/01/vitamin-b2-riboflavin-and-vitamin-k.html)], vitamin B3 [given the potential for NAD+, which most of a dose of niacinamide is converted into, to enhance PARP activity and cause ATP depletion under certain circumstances (http://hardcorephysiologyfun.blogspot.com/2008/12/nonoxidative-pentose-cycle-prpp-and.html)], and lipoic acid [given that the biosynthesis of the lipoyl- cofactors for different components of the pyruvate dehydrogenase complex occurs after the octanoyl- precursor has become bound to the enzymes and that exogenous lipoic acid can therefore inhibit the pyruvate dehydrogenase complex, rather than activate it (http://hardcorephysiologyfun.blogspot.com/2009/01/inhibition-of-activity-of-pdhc-by-r-or.html)] should arguably be limited under various circumstances, whereas the full, therapeutic dosage ranges of cofactors with fewer dose-limiting side effects tend not to be widely recognized or known.

Sunday, January 25, 2009

Controversy About Dietary Protein Requirements and "Nitrogen Balance"

These are some articles that discuss flaws in the reasoning and methodologies that go into estimates of "nitrogen balance" and protein requirements for different groups of people [Kurpad and Young, 2003: (http://jn.nutrition.org/cgi/content/full/133/4/1227) (http://www.ncbi.nlm.nih.gov/pubmed/12672948?dopt=Abstract); Fuller and Garlick, 1994: (http://www.ncbi.nlm.nih.gov/pubmed/7946519); Millward, 1998: (http://jn.nutrition.org/cgi/content/full/128/12/2563S) (http://www.ncbi.nlm.nih.gov/pubmed/9868206?dopt=Abstract)]. The articles by Millward (1998) and Fuller and Garlick (1994) are pretty devastating critiques of the experimental methods that go into measuring amino acid metabolism and even of the concept of nitrogen balance. The concept of "nitrogen balance" has never seemed very useful to me, and these articles highlight a lot of the flaws with the concept.

I think one has to look at these requirements or "recommendations" for protein intake and ask oneself if the numbers seem reasonable. It just doesn't make sense to me that a protein intake of 0.6 g/kg/d, a number that one commonly sees and that these articles discuss, would be adequate. How could it be? That's 42 grams of protein for a 70-kg person (154 lbs). That seems just bizarre to me. A lot of these articles on protein are really difficult to follow, but the concept of nitrogen balance has always seemed to me to be like something related to animal feed that's left over from the 1940s. Maybe that's too harsh an assessment, but the concept seems barely scientific to me. If I invented a concept such as "CO2 balance," based on some variation of the measurement of the volume of CO2 expired (a procedure that is not called "CO2 balance" but that is used in some research looking at overall metabolic rates, in closed, experimental "chambers," for example), that wouldn't allow me to draw sweeping conclusions about the details of intracellular metabolism. But the concept of nitrogen balance attempts to do that, to some extent, and there are some significant issues with the concept.

It's become clear that some types of physical exercise do increase protein requirements, but the changes in the requirements vary with the intensity and duration and type of exercise. One major reason that protein requirements are higher for athletes is that branched-chain amino acids are oxidized for fuel in the skeletal muscles. There will probably continue to be a lot of controversy about the question of protein "requirements," but I've learned that the assumptions that go into a lot of these rigid rules, about nutrient requirements, can be seriously flawed. There's evidence that low dietary protein intakes reduce the mitochondrial DNA contents in cells in the liver and muscles, contribute to anemia in elderly people, in particular, etc. There's also research showing that high-protein diets increase the vitamin B6 requirement substantially, and I'm sure there's lots of similar research. The branched-chain alpha-keto acid dehydrogenase complex, which is the major multienzyme complex that regulates branched-chain amino acid oxidation during exercise, is, incidentally, a thiamine-derived-cofactor-dependent enzyme complex (the overall activity of the enzyme complex is vitamin B1-dependent).

Saturday, January 24, 2009

Tables for Zinc and Copper Contents of Foods

Here are the USDA pdf's on the zinc contents (http://www.nal.usda.gov/fnic/foodcomp/Data/SR17/wtrank/sr17a309.pdf) and copper contents (http://www.nal.usda.gov/fnic/foodcomp/Data/SR17/wtrank/sr17a312.pdf) of different foods. Those tables give a person a crude sense of things. The table on copper has a pretty major typo at the top. The values, such as 0.111 or whatever, for a hamburger, are 0.111 mg, not ug (micrograms). So 0.060 is 60 micrograms, and 0.111 is 111 micrograms (about a tenth of a milligram), etc. This table on the Wilson's disease site gets the units correct: (http://www.wilsonsdisease.org/copper.html).

Note on Copper Oxide

I also meant to say that building any kind of long-term plan for copper intake around something like copper oxide (cupric oxide) isn't really going to work, given its very poor solubility and, consequently, its negligible bioavailability. It may not matter much in practice, given that many people seem to develop copper overload without necessarily having eaten a lot of copper-containing foods. It's really strange that copper is in so many of these "healthy" foods, too. Maybe some factors in some foods prevent copper absorption, but it's odd that so many supposedly-healthy foods contain significant amounts of copper. I don't have any real point in saying that, and I don't know what to make of it.

Copper and Zinc: Complexities and Simplifications

I thought I'd mention the issue of copper intake again and put up a couple of basic articles. I don't think zinc supplementation is a good idea, and copper supplementation is potentially really hazardous, too, except in tiny dosages (maybe 250-500 micrograms/d). The overall picture is that copper is required for iron export from cells (the "ferroxidase" activity comes from ceruloplasmin itself, I think), and iron can build up in astrocytes and in bone marrow cells in copper deficiency. Copper is also required for complex IV activity (cytochrome c oxidase), Cu/Zn-superoxide dismutase, lysyl oxidase (required for collagen cross-linking), the lysyl oxidase-related proteins (I forget the names) that are involved in axonal transport in some way, and other copper-dependent enzymes I'm forgetting. Copper also binds to and activates S-adenosylhomocysteine hydrolase, and it's required for dopamine beta-hydroxylase activity (the enzyme that converts dopamine into noradrenaline) and diamine oxidase activity (which degrades histamine and some polyamines). The locus ceruleus, the major noradrenergic cell group in the brain, appears slightly bluish under some circumstances (ceruleus, as in "cerulean blue"; locus ceruleus translates roughly as "blue spot"), apparently because of the copper bound to dopamine beta-hydroxylase (or possibly because of copper stored in neuromelanin pigments, or because ceruloplasmin is more abundant and is transporting more copper into those cells).

Copper binds to specific tyrosyl residues on some of these enzymes, and those residues are then converted, nonenzymatically (or "autocatalytically"), into protein-derived quinone cofactors (the topaquinone and tyrosyl quinone structures formed when copper binds at tyrosyl residues). So there's no separate cofactor biosynthesis: the process is autocatalytic, but the transport of copper to those enzymes, for incorporation into the tyrosyl quinones and topaquinones, is normally highly regulated and direct. Almost no intracellular copper is supposed to exist as free copper ions, but oxidative stress can cause metals such as zinc, and presumably copper, to be liberated en masse from their storage sites (and thereby cause massive neurotoxicity, etc.). Peroxynitrite is a major species that does that, and it's one reason for my interest in uric acid as really the primary peroxynitrite scavenger/"sink" in humans.

Excessive zinc supplementation can cause copper deficiency and has been associated, in countless articles, with neurotoxicity and bone marrow toxicity (anemia, thrombocytopenia, pancytopenia). There's an assumption that all of the neurotoxicity and bone marrow toxicity comes from zinc-induced copper depletion, but I don't think that's the case. Free zinc has a vast range of neurotoxic effects, and most of them have nothing to do with copper. Small amounts of copper are required for normal iron transport, normal mitochondrial activity, collagen formation, axonal transport, and SAHH and dopamine beta-hydroxylase activity, etc. But the point is that getting more than the RDA seems risky to me. There are case reports of people with grave bone marrow toxicity that appeared to be due to severe copper depletion, and I'm sure copper depletion does play a role in the zinc-induced toxic effects. But zinc could also cause bone marrow toxicity by all sorts of mechanisms that have nothing to do with copper. Here's one article showing copper deficiency with zinc toxicity, but has copper repletion ever been shown to promote neurological recovery? (http://www.ncbi.nlm.nih.gov/pubmed/15834043?dopt=Abstract). I don't feel like looking for other articles. There's something about the wording or keyword choice in the articles on zinc toxicity: they don't come up properly in searches, and you need to try lots of different combinations, like zinc + supplement + (thrombocytopenia OR cytopenias OR anemia OR pancytopenia OR "bone marrow" OR neurotoxicity OR neurodegeneration), and so on. There are tons of them, and I have many that I've downloaded over the years, scattered around my computer.
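Since these case reports are so hard to find with any single query, here's a minimal sketch of how one might script a few of those keyword combinations against PubMed's esearch E-utility. The query strings are just the combinations suggested above and would need tinkering; the endpoint is NCBI's standard esearch URL, but check the current E-utilities documentation before relying on it.

# Minimal sketch: run a few variations of the zinc-toxicity search against PubMed's
# esearch E-utility and report how many hits each wording gets. The query strings are
# just the keyword combinations suggested above; adjust them as needed.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

queries = [
    'zinc supplementation AND (thrombocytopenia OR pancytopenia OR anemia OR "bone marrow")',
    'zinc supplement AND copper deficiency AND myelopathy',
    'zinc toxicity AND (neurotoxicity OR neurodegeneration)',
]

for term in queries:
    resp = requests.get(ESEARCH, params={"db": "pubmed", "term": term,
                                          "retmode": "json", "retmax": 5})
    result = resp.json()["esearchresult"]
    print(f"{result['count']:>6} hits: {term}")
    for pmid in result["idlist"]:
        print(f"        PMID {pmid}")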

There is reason to think that it might be wise, if one chooses to supplement with a minuscule dosage of copper, to use a chelated form. Copper oxide is poorly absorbed and has essentially no appreciable bioavailability. The title of this article is "Cupric oxide should not be used as a copper supplement for either animals or humans": [Baker, 1999: (http://jn.nutrition.org/cgi/content/full/129/12/2278)]. Copper oxide is actually insoluble in water, I'm remembering, as I look at that article. Something like chelated copper(II) gluconate or copper as an "amino acid chelate" is likely to have a reliable degree of bioavailability. I think supplement manufacturers are no longer allowed to put the word "chelated" or "chelate" on a supplement that does not, in fact, contain a chelated mineral. The difference between a chelate and a salt is the presence or absence, respectively, of coordinate covalent bonds: "chelated copper gluconate" contains copper bound to one or more gluconate ligands by coordinate covalent bonds, whereas "copper gluconate" may be just copper ions and gluconate, in a crystalline form, that dissociate into ions in water. Chelates tend to be absorbed by dipeptide transporters or amino acid transporters in the intestinal tract.

I'd be willing to bet that chelated copper supplements are significantly more bioavailable than copper from many foods, and that's one reason a 250-500-microgram (or even smaller, I suppose) dosage of a chelated copper supplement might be one approach. I don't know who decided on the dosages of copper in some supplements, but I'd be willing to bet that chelated forms are quite a bit more potent than copper from food (making the high dosages that much more questionable); 250-500 micrograms of a chelate might be equivalent to roughly twice that amount from food. I don't know exactly, but the RDA is around 900 to 1,200 micrograms of elemental copper per day (0.9-1.2 mg/d), and I think copper is one of those cases in which the people who set the RDA got it right. The articles on copper retention in humans show that it's pretty difficult to become copper deficient, but those articles don't and can't really show the adverse effects of copper. It's not likely to be possible to show, for example, that 2 mg/d causes more intracellular free radical stress in the brains of some people than 1 mg/d does. But given that a person is unlikely to become deficient at 1 mg/d, the lower intake seems like a better choice to me. This article also discusses the negligible bioavailability of copper oxide: (http://www.ajcn.org/cgi/reprint/67/5/1054S). If a person eats a lot of legumes or whatever, then the 250-mcg copper supplement may be unnecessary. The USDA nutrient content tables are available online and can give one a sense of the mineral contents of foods. The advantage of taking a tiny amount of a chelated form, though, is that one wouldn't need to worry about poor bioavailability, insolubility, or lack of dissociation from food constituents, etc. A lot of people, on the other hand, seem to have a problem with excessive copper in numerous disease states, so dietary copper is probably absorbed fairly well. The main thing is that the turnover of copper is very slow, and that's why I mention this type of detailed consideration about dosages and about estimating one's intake. Zinc is abundant in meats, especially, and the USDA tables give one a sense of that: there's about 3-5 or more mg of zinc in a single "serving" of meat, even though the tables sometimes present the information in ways that are difficult to make sense of.
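Here's a back-of-the-envelope sketch of that dosing logic, just to make the comparison explicit. The "potency factor" is only the rough guess described above (a chelate being perhaps twice as potent as food copper), used purely for illustration; it is not a measured value.

# Back-of-the-envelope version of the reasoning above. The potency factor is a guess
# (a chelate being roughly twice as "potent" as food copper), used here purely for
# illustration; it is not a measured value.
RDA_LOW_MCG, RDA_HIGH_MCG = 900, 1200   # the 0.9-1.2 mg/d range mentioned above
ASSUMED_POTENCY_VS_FOOD = 2.0           # hypothetical: chelate ~2x food copper

for chelate_mcg in (250, 500):
    food_equivalent = chelate_mcg * ASSUMED_POTENCY_VS_FOOD
    print(f"{chelate_mcg} mcg chelate ~ {food_equivalent:.0f} mcg food copper "
          f"({food_equivalent / RDA_LOW_MCG:.0%} of the 900 mcg RDA)")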

Then there are always the tablet dissolution issues with some supplements, and those issues have been written about extensively. There are some independent "lab" organizations that publish test results for dietary supplements online, and it's always worthwhile to buy from a reputable manufacturer. Some of those independent laboratory sites have found tablets that couldn't be broken with hammers and wouldn't dissolve at all. There's a lot of information about that type of thing in the literature, too. It's still, unfortunately, an issue in some cases.

If a person has a tendency toward copper overload or has any liver disease or inflammatory disease state, even low doses like 100-500 mcg could be hazardous. A person would want to talk with his or her doctor before taking something like copper, even in minuscule dosages, especially if there's any disease state or history of disease. I'm just putting this summary up because the issue is so extraordinarily mind-bending and complex. The concept of the zinc/copper ratio of supplements is not valid, in my view. Zinc supplementation, for example, can upregulate metallothionein expression in various tissues, and the metallothionein can then sequester copper. But in what way does it make sense to then increase the copper dosage to compensate? It doesn't, but that's one way of looking at the zinc/copper ratio in supplementation. Additionally, it's not easy to evaluate copper status. The serum copper or ceruloplasmin can tell a person that he or she is not deficient, but even those markers are not necessarily very telling. One can give people 30 mg of zinc in a study, but does the absence of a decrease in serum copper mean that the zinc is safe, or that copper is not accumulating intracellularly in various tissues, bound in an unstable way to metallothioneins? No, and I've seen articles saying that, basically, the serum copper and ceruloplasmin are not very informative. One article used erythrocyte Cu/Zn-superoxide dismutase activity to evaluate copper status, but none of the available tests for copper status are very satisfactory.

Another potential source of erroneous conclusions in this research is that zinc can be collagenolytic, by increasing matrix metalloproteinase activities, while copper tends to promote collagen formation. Copper is also a structural component of clotting factor V and, if memory serves, factor VIII, and zinc activates several different thrombolytic proteins. But serum zinc is maintained quite effectively, and increasing one's zinc intake through food provides much smaller amounts. Some of the adverse effects of copper could be explained by those potentially prothrombotic effects, even though the research, unconvincing as it is, suggests that factor V levels decrease in response to copper supplementation or repletion (I'm sure that tiny amounts of copper are required for those clotting factors). But if the activity increases, where's the benefit from a decrease in the serum factor V protein content? My point with the opposing effects on collagen and other processes, including the claims that "zinc-induced neurotoxicity is due to copper depletion" and "zinc-induced bone marrow failure is due to copper depletion," is that zinc has effects that oppose copper's effects but that don't have anything to do with copper depletion. For example, zinc can inhibit heme biosynthesis by forming zinc protoporphyrin, which is actually used in many articles as a marker of ineffective erythropoiesis or low-level anemia, and that effect doesn't have anything to do with copper. And zinc can break down the extracellular matrix by activating matrix metalloproteinases, so copper might seem to be required in some amount to help "rebuild" the collagen or to keep some blood marker of collagenolytic activity from rising. But if the zinc weren't being given in such high doses, the collagen breakdown might not be so high in the first place. So there's a tendency to attribute everything about excessive zinc supplementation to copper depletion, and that assumption is, in some cases, built into the proposed zinc-to-copper ratios. But the bone marrow toxicity might be due to zinc's direct inhibition of mitochondrial activity, as has been shown in neuronal cells, etc. In many of those articles on zinc toxicity, copper supplementation doesn't seem to work very well in treating the toxicities (bone marrow or neuronal/astrocytic/oligodendrocytic), especially the neurotoxicity. Given the multitude of neurotoxic effects of zinc that have been identified in research on Alzheimer's disease and other neurodegenerative diseases, it's reasonable to think that a lot of the neurotoxicity in those case reports had little to do with copper depletion.

Also, any decrease in serum albumin can seriously disturb zinc transport, and compensating for that with extra zinc is potentially an unsound strategy, for many reasons.

Equation for Albumin-Corrected Serum Calcium

Here's a link to one of the equations used to calculate the "corrected" serum calcium concentration (corrected for the serum albumin concentration): (http://www.mdcalc.com/calciumcorrection). There are other equations, and some researchers don't think the correction needs to be done in very many contexts or disease states, aside from renal failure. But that type of correction might be of interest to someone taking supplemental vitamin D or calcium (http://hardcorephysiologyfun.blogspot.com/2009/01/calcium-magnesium-serum-calcium-vitamin.html).
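For reference, one commonly used version of the correction (other versions exist, and, as noted, the correction itself is debated) is: corrected calcium (mg/dL) = measured calcium (mg/dL) + 0.8 x (4.0 - albumin (g/dL)). A minimal sketch:

# Commonly used albumin correction for total serum calcium:
#   corrected Ca (mg/dL) = measured Ca (mg/dL) + 0.8 * (4.0 - albumin (g/dL))

def corrected_calcium_mg_dl(measured_ca_mg_dl, albumin_g_dl, normal_albumin_g_dl=4.0):
    """Albumin-corrected total serum calcium, in mg/dL."""
    return measured_ca_mg_dl + 0.8 * (normal_albumin_g_dl - albumin_g_dl)

# Example: measured calcium 8.8 mg/dL with albumin 2.5 g/dL
print(corrected_calcium_mg_dl(8.8, 2.5))  # -> 10.0 mg/dL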

Accumulation of Oxidized Folates (Dihydrofolate and Folic Acid) in Cobalamin Deficiency: Association With Mitochondrial Dysfunction

The authors of this article [Smith et al., 1973: (http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pubmed&pubmedid=4204322) (http://www.ncbi.nlm.nih.gov/pubmed/4204322)] suggested that the development of fatty liver in sheep, in response to prolonged cobalamin (vitamin B12) deficiency, might eventually impair the capacity of cells in the liver to maintain intracellular folates in a reduced state. The authors found that prolonged cobalamin deficiency both depleted intracellular total folates and decreased the percentages of folates that were polyglutamylated, as would be expected. But they also found increases in the percentages of unmetabolized folic acid (a fully-oxidized folate) and dihydrofolate (DHF) in response to the more long-term cobalamin depletion.

This explanation by Smith et al. (1973) is consistent with the research showing mitochondrial dysfunction in response to cobalamin depletion [Leeds and Brass, 1994: (http://www.jbc.org/cgi/content/abstract/269/6/3947) (http://www.ncbi.nlm.nih.gov/pubmed/7508436?dopt=Abstract); Toyoshima et al., 1996: (http://www.ncbi.nlm.nih.gov/pubmed/8774237); Nakai et al., 1991: (http://www.ncbi.nlm.nih.gov/pubmed/1679919)], and fatty liver is essentially the result of mitochondrial dysfunction and injury in hepatocytes [Pessayre et al., 2004: (http://www.ncbi.nlm.nih.gov/pubmed/15489566)]. Given that cobalamin depletion caused the intracellular folates to become depleted, the mitochondrial dysfunction would be expected to have resulted from the adverse metabolic effects of both folate depletion and cobalamin depletion. But that's not the main point here. The core point of the article by Smith et al. (1973) is that there's some disturbance in the cellular redox state that decreases the activities of the cytosolic, NADPH-dependent enzymes and the mitochondrial, NAD+-dependent enzymes of the folate cycle. The increases in oxidized folates (Smith et al., 1973) would be expected to result from a decrease in the normal cytosolic NADPH/NADP+ ratio, and a decrease in that ratio could occur in response to inhibition of the tricarboxylic acid cycle enzymes by methylmalonic acid and propionic acid (Toyoshima et al., 1996; Nakai et al., 1991). The cytosolic NADPH/NADP+ ratio is normally about 4 [Horne, 2004: (http://jn.nutrition.org/cgi/content/full/133/2/476) (http://www.ncbi.nlm.nih.gov/pubmed/12566486?dopt=Abstract)], and the intramitochondrial NAD+/NADH ratio is normally kept well in favor of NAD+. Methylmalonic and propionic acid also inhibit respiratory chain enzymes directly, and the inhibition of the TCA cycle enzymes in the context of methylmalonic and propionic acid accumulation can be viewed as secondary to, or as occurring in concert with, those direct inhibitory effects. The result would be a decrease in the intramitochondrial NAD+/NADH ratio, which could eventually be expected to decrease the cytosolic NADPH/NADP+ ratio as well. Ceconi et al. (2000) [Ceconi et al., 2000: (http://cardiovascres.oxfordjournals.org/cgi/content/full/47/3/586) (http://www.ncbi.nlm.nih.gov/pubmed/10963731?dopt=Abstract)] found, for example, that the usual assumptions about cellular redox couples can become invalid in response to reperfusion-induced oxidative stress, and a similar situation would be expected in the context of mitochondrial dysfunction in general.

I don't have time to get into the research now, but the methyl trap hypothesis and, to a lesser extent, the formate starvation hypothesis, the models that attempt to account for the effects of cobalamin deficiency on the folate cycle, are based on the implicit assumption that cobalamin deficiency produces no disturbances in mitochondrial functioning or in the cellular redox state. The assumption is that the metabolic effects can be explained in terms of allosteric effects, such as the allosteric inhibition of methylenetetrahydrofolate reductase by S-adenosylmethionine (in response to methionine infusion in cobalamin deficiency, etc.). But I think the article by Smith et al. (1973) (and the other articles I cited in relation to mitochondrial function in cobalamin deficiency) is important and could help explain the greater effectiveness of reduced folates in neurodegenerative disorders or advanced disease states (in which mitochondrial dysfunction could decrease the capacity of cells to maintain intracellular folates in a reduced state).

Friday, January 23, 2009

Mitochondrial DNA, Myogenic Satellite Cells, Purines Released From Neuronal Progenitor Cells, and Reduced Folates

I know the discussion (http://hardcorephysiologyfun.blogspot.com/2009/01/myogenic-satellite-cells-mitochondrial.html) about the transfer of wild-type mtDNA into myocytes, in reference to this article [Taivassalo and Haller, 2004: (http://www.ncbi.nlm.nih.gov/pubmed/15576055)], might sound far-fetched, and I suppose it's conceivable that the satellite cells simply differentiate into myocytes. But the article does contain a discussion of the apparent transfer of nuclei, mtDNA, and other intracellular constituents from the satellite cells to the myocytes. The authors cite references 42, 43, 48, and 50 as evidence, and one of their citations is a book on satellite cells. This is another article that discusses the capacity of satellite cells to restore mtDNA in damaged myocytes [Smith et al., 2004: (http://www.ncbi.nlm.nih.gov/pubmed/15576056)].

That concept of the uptake of DNA has been shown in other cell types and is thought to occur with melanosomes, I think, which are pigment-containing "granules" that are released by melanocytes but that can also contain intracellular proteins, such as proliferating cell nuclear antigen (PCNA) [Iyengar, 1998: (http://www.ncbi.nlm.nih.gov/pubmed/9585249)]. It's reminiscent of some proposed axonal transport mechanisms, too. I know this type of thing may be controversial, but I'm just thinking out loud about the full range of mechanisms that have been discussed in these areas. And I know that perineuronal satellite cells are phenotypically distinct from myogenic satellite cells, but is a process like this going to be totally unique to myogenic satellite cells? There are many articles saying that myogenic satellite cells are pluripotent, and presumably that would imply that some of their functions are at least somewhat generalizable to other pluripotent or multipotent cells in different parts of the brain. Even if one rejects the mechanism of mtDNA transfer, there has to be some mechanism to explain the association between satellite cell proliferation and the restoration of wild-type mtDNA in myocytes.

Another reason I mention purines in the context of reduced folates and neuronal progenitor cells is that the de novo purine biosynthetic capacities of mitotic cells tend to be much higher than those of postmitotic or nonmitotic cells, such as postmitotic neurons (which have strikingly low capacities for purine biosynthesis). Here's an interesting article showing that neuronal progenitor cells derived from the subventricular zone of the brain release purines into the extracellular space in "bursts" [Lin et al., 2007: (http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1924912) (http://www.ncbi.nlm.nih.gov/pubmed/17188262)]. This suggests that, to the extent that reduced folates, such as L-methylfolate or L-leucovorin, can increase neuronal progenitor cell proliferation [here's one article showing that folate deficiency reduces the number of neuronal progenitor cells in the hippocampus, and the authors cite other articles showing suppression of neurogenesis in response to folate depletion: Kronenberg et al., 2008: (http://www.jneurosci.org/cgi/content/full/28/28/7219) (http://www.ncbi.nlm.nih.gov/pubmed/18614692)], the progenitor cells, as shown by Lin et al. (2007), could release purines before they die off. Even though many progenitor cells in the brain do die off without differentiating into mature neurons, their release of purines into the interstitial fluid could reasonably be expected to exert trophic effects.

The articles on the effects of oral purines, such as this one [Kichenin et al., 2000: (http://jpet.aspetjournals.org/cgi/content/full/294/1/126) (http://www.ncbi.nlm.nih.gov/pubmed/10871303)], show the way in which purines can be transported en masse into cells while producing only transient, localized increases [as in the portal blood, shown by Kichenin et al. (2000)] in the concentrations of adenosine or other purines in serum. It's only when the cells are extracted and analyzed, as Kichenin et al. (2000) discussed, that the effects become apparent. The amount of extracellular adenosine derived from the red blood cells (RBC) of rabbits given intrajejunal infusions of adenosine 5'-triphosphate was much larger than the RBC-derived adenosine from control rabbits, and the intracellular ATP levels were also substantially increased in the RBC from the ATP-treated rabbits [Kichenin et al., 2000]. By the same token, the contribution of neural progenitor cell-derived purines to intracellular purine pools in neurons or astrocytes may be highly localized. The purines (and probably pyrimidines, also) released by neural progenitor cells could then support mtDNA replication in neurons or astrocytes that have diminished purine and pyrimidine pools, such as can result from mtDNA depletion and the mitochondrial dysfunction that tends to follow from it (http://hardcorephysiologyfun.blogspot.com/2008/12/heteroplasmy-mtdna-copy-number-uridine.html).

Potential Involvement of Neural Progenitor Cells in Neurotrophic Effects of Guanosine

There's some evidence that guanosine, for example, given at remarkably-low dosages, intraperitoneally, can promote remyelination by enhancing the proliferation or survival or differentiation, etc., of oligodendrocyte progenitor cells [Jiang et al., 2003: (http://www.ncbi.nlm.nih.gov/pubmed/14663211)]. Some of those types of trophic effects of exogenous purines might work together with something like reduced folates and enhance the differentiation of progenitor cells into glia or neurons.

Myogenic Satellite Cells, Mitochondrial DNA, Neuronal Progenitor Cells, Perineuronal Satellite Cells, and Reduced Folates

The authors of this article [Taivassalo and Haller, 2004: (http://www.ncbi.nlm.nih.gov/pubmed/15576055)] discuss the possibility that myogenic satellite cells in skeletal muscle, some of which are CD34+ and respond to erythropoietin, may be able to donate wild-type mtDNA to myocytes and thereby reverse heteroplasmy, to some extent, in people with mitochondrial disorders. The satellite cells are mitotic and may have some pluripotency, as stem cells, and the authors have found that resistance exercise (weight training), in particular, stimulates satellite cell proliferation and reverses some of the mtDNA heteroplasmy in the skeletal muscle myocytes of people with mitochondrial myopathies. The idea is that the mitotic cells tend not to accumulate as much mutant mtDNA and act like a "reservoir" of wild-type mtDNA that has persisted since development.

I wonder if something similar might occur in the brain, in which either perineuronal satellite cells or neural progenitor cells might either donate mtDNA to neurons or undergo apoptosis and provide purines (and pyrimidines) to adjacent cells, thereby supporting mtDNA turnover and transcription in postmitotic neurons. There's constant turnover of mtDNA in cardiac myocytes and neurons, and I think the half-life of mtDNA turnover is about 7 days in rodents; presumably it's slower in humans. The reason I suggest this nucleotide "donation" effect, specifically with purines, is that purines can't just disappear when a cell undergoes apoptosis. Oxidatively-modified bases, for example, such as 8-hydroxy-2'-deoxyguanosine (the common oxidative modification of deoxyguanosine), have to be excreted renally. And I think there might actually be a low activity of xanthine oxidase in the brain itself, even though there's a really high level of xanthine oxidase activity in the endothelial cells lining the cerebral blood vessels. In tumor lysis syndrome, for example, the lysis of cells from radiation therapy or other causes can produce acute urate nephropathy, which is uric acid-induced acute renal failure. That's a very extreme condition, and uric acid is probably not the only nephrotoxic mediator produced by radiation therapy. But purines cannot simply be degraded into nothing, and I wonder if the rate of degradation of purines in apoptotic perineuronal satellite cells or poorly-differentiated neural progenitor cells might be slow enough to allow "purine-buffering," via the accumulation in the extracellular fluid of apoptotic-cell-derived adenosine, inosine, guanosine, or their nucleotides (monophosphates, diphosphates, etc.). Here's an article that discusses perineuronal satellite cells and the different pathways by which subventricular-zone-derived and dentate-gyrus-derived neural progenitor cells are trafficked: [Kuhn et al., 1997: (http://www.jneurosci.org/cgi/content/full/17/15/5820) (http://www.ncbi.nlm.nih.gov/pubmed/9221780?dopt=Abstract)]. Depletion of folates from the brain has been shown to reduce the proliferation of neural progenitor cells, and reduced folates, by virtue of their capacity to cross the blood-CSF barrier, could contribute to neuronal repair through some of those types of mechanisms.
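To put the ~7-day half-life figure in perspective, here's a simple first-order turnover calculation (the 7-day value is the rodent figure mentioned above, and treating mtDNA turnover as a single first-order process is itself a simplification):

# Fraction of the original mtDNA population remaining after t days, treating turnover
# as a simple first-order process with the ~7-day half-life cited above for rodents.
def fraction_remaining(t_days, half_life_days=7.0):
    return 0.5 ** (t_days / half_life_days)

for t in (7, 14, 28):
    print(f"after {t:>2} days: {fraction_remaining(t) * 100:.0f}% of the original mtDNA remains")
# after  7 days: 50%
# after 14 days: 25%
# after 28 days: 6%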

Folates, Vitamin B12, Vitamin B6, Frataxin, and Iron-Sulfur Cluster Biosynthesis

The authors of this article [Tan et al., 2003: (http://hmg.oxfordjournals.org/cgi/content/full/12/14/1699) (http://www.ncbi.nlm.nih.gov/pubmed/12837693)] discuss the role that serine hydroxymethyltransferase (SHMT) activity may play in iron-sulfur cluster biosynthesis. The paper is somewhat theoretical, but other articles lend considerable credence to the hypotheses that Tan et al. (2003) offer. Tan et al. (2003) discuss the different explanations for the function of frataxin, a mitochondrial protein whose loss of function, due to genetic mutations, produces Friedreich's ataxia. Frataxin appears to handle iron in a way that allows iron-sulfur (Fe/S) clusters, the cube-like, "scaffolded" cofactors, containing iron bound to sulfur atoms, that are found in many mitochondrial proteins, to be synthesized normally. Tan et al. (2003) discuss the way SHMT can maintain serine availability, and serine is a substrate in the transsulfuration pathway, the reactions, dependent on pyridoxal-5'-phosphate derived from vitamin B6 (pyridoxine), that produce cysteine. So maintaining cysteine availability, or potentially even the chaperoning of cysteine, appears to be necessary for the biosynthesis of Fe/S clusters.

When Fe/S clusters are not formed properly, the result is basically mitochondrial dysfunction in the affected tissue: lactic acidosis, skeletal or cardiac muscle myopathies, etc. It's interesting that heavy-chain ferritin can increase cytosolic SHMT (cSHMT) expression. The articles discussing iron and zinc in relation to SHMT expression framed the phenomenon in terms of nucleotide availability and DNA replication, but it's possible that increases in the intracellular heavy-chain and light-chain ferritin contents, or in the labile iron pool (mobile, exchangeable iron), increase cysteine availability for Fe/S cluster biosynthesis by increasing cSHMT expression and protein content. I know there are more articles discussing transsulfuration as basically a way of maintaining cysteine availability rather than as being "about" homocysteine "disposal." But the interactions of iron metabolism with cSHMT and mitochondrial SHMT (mtSHMT) activities are really interesting, and reduced folates, vitamin B12 (methylcobalamin), and pyridoxine are all, directly or indirectly, required for mtSHMT and cSHMT activities and for the cycling of substrates between those enzymes. It's conceivable that this role of SHMT activity in Fe/S cluster biosynthesis helps explain the relief of restless legs syndrome by high doses of folates, given that reduced iron availability to, or "acquisition" by, the brain (in particular, by different groups of dopaminergic neurons) is thought to underlie some cases of restless legs syndrome.

Thursday, January 22, 2009

Purines and Orotic Acid in Porphyrias and "Folate-Responsive" Neurological Disorders

This letter [Gajdos et al., 1966: (http://www.ncbi.nlm.nih.gov/pubmed/5968309)] is really interesting and shows that the accumulation of protoporphyrin and coproporphyrin, induced in rats by dietary orotic acid administration, can be reversed by exogenous adenine-based purine nucleotides (adenosine 5'-monophosphate, ADP, or ATP). There are lots of old letters and case reports on the use of intravenous AMP to treat different porphyrias [here's one: Gajdos, 1974: (http://www.ncbi.nlm.nih.gov/pubmed/4129728)], but I know the FDA banned the use of injectable AMP in either the 1970s or 1980s; I think it was the mid-1970s. Intravenous (i.v.) AMP can cause bradycardia and hypotension, among other cardiac problems, and i.v. AMP is no longer used for anything other than specific medical procedures. It's interesting, though, that Kichenin et al. (2000) found that as little as 5 mg/kg/d of intrajejunal ATP (mimicking oral administration) increased ATP levels in the red blood cells of rabbits [Kichenin et al., 2000: (http://jpet.aspetjournals.org/cgi/content/full/294/1/126) (http://www.ncbi.nlm.nih.gov/pubmed/10871303)]. Doses of either 3 or 20 mg/kg bw did not produce hypotension or bradycardia but did induce peripheral vasodilation and bronchodilation, an elevation of the arterial PaO2 (the partial pressure of oxygen in the arterial blood), and a decrease in the respiratory rate. There's reason to think some of these effects would occur, to some extent, in humans, but I'm not going to go into all of that research now. To administer adenine-based purine nucleotides orally, though, it would be necessary to provide the purine in some form other than a "slow-release" tablet; if the purine were released slowly, all of it would be converted into uric acid by the intestinal epithelial cells. Additionally, only some purine salts, and generally the phosphorylated nucleotides rather than the dephosphorylated nucleosides, are appreciably water-soluble. In the research on oral guanosine, the researchers first used guanosine itself and found that its low solubility limited the dosage they could give the rats (they could only prepare water solutions of up to x molar guanosine, and the rats could only drink so much water every day), so they had to switch to guanosine monophosphate disodium.
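The solubility constraint is easy to make concrete: the maximum daily dose from drinking water is roughly (solubility) x (daily water intake) / (body weight). The numbers in this sketch are placeholders for illustration only, not values from those studies:

# Sketch of the solubility constraint described above: the maximum daily dose a rat can
# get from drinking water is capped by (solubility) x (daily water intake) / (body weight).
# All three numbers below are placeholder values for illustration only, not figures from
# the studies mentioned above.
def max_dose_mg_per_kg(solubility_mg_per_ml, water_intake_ml_per_day, body_weight_kg):
    return solubility_mg_per_ml * water_intake_ml_per_day / body_weight_kg

# e.g. a sparingly soluble nucleoside at ~0.5 mg/mL, a rat drinking ~30 mL/day, ~0.35 kg
print(f"{max_dose_mg_per_kg(0.5, 30, 0.35):.0f} mg/kg/day at most from drinking water")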

Kichenin et al. (2000) discuss, briefly, the way red blood cells make and maintain their purine pools via ATP derived from glycolysis, and orotic acid was shown by Gajdos et al. (1966) to produce porphyria by inducing ATP depletion and, more broadly, adenine nucleotide depletion in red blood cells. I know the transport of iron into the mitochondria of erythroblasts is ATP-dependent, and scientists have hypothesized that mitochondrial activity is required for ferrochelatase activity. I suppose the purines could, in some cases, interfere with the cell cycle in erythroblasts or other erythrocyte precursor cells and thereby indirectly suppress heme biosynthesis, as a byproduct of a cytostatic effect. But I don't think that would occur in vivo, given the rapidity with which adenosine is metabolized. In cell culture studies, large concentrations of exogenous hypoxanthine or other purines can rescue folate-depleted erythroblasts, and it's conceivable that the purines suppress de novo purine biosynthesis in erythrocyte precursor cells and spare ATP by that mechanism.

It's interesting that folate and cobalamin (vitamin B12) deficiencies can cause orotate accumulation. People with mutations in glutamate formiminotransferase (GFIT) can have orotic aciduria and megaloblastic anemia, and one group of researchers hypothesized that folate- and cobalamin-depletion-induced decreases in GFIT activity could reduce histidine recycling and thereby reduce histidine availability for hemoglobin biosynthesis. Exogenous uridine could reasonably be expected to suppress orotate accumulation, given that uridine nucleotides inhibit de novo pyrimidine biosynthesis at the carbamoyl phosphate synthetase II step, upstream of orotic acid formation. I'm not up for collecting the references from my computer now. Researchers have also sometimes found evidence that histidine loading can be used to either diagnose or treat anemias due to folate deficiency, but that sounds like a bad idea to me. The concern I would have is the potential for histamine biosynthesis to increase in mast cells and cause adverse effects. Histamine can be stored in mast cells, and histidine loading could conceivably produce prolonged, mast-cell-mediated injury to the blood-brain barrier or other tissues. Histamine can act as a secretagogue on mast cells and cause them to release much more destructive mediators (proteases, pro-inflammatory cytokines, etc.). I can at least link to one of the articles showing that folate or cobalamin depletion can increase orotate accumulation. It's an old article, but the findings are consistent with the effects that would result from depletion-induced decreases in the activity of GFIT (and possibly of other folate-derived-cofactor-dependent enzymes): [Van der Weyden et al., 1979: (http://www.ncbi.nlm.nih.gov/pubmed/465362)]. Folate and cobalamin depletion are also well-known to reduce purine levels, in proliferating cells in particular, and one article shows that folate depletion reduces purine salvage in the liver.

I'm actually starting to think that some of the decreases in purine salvage that occur in response to folate and cobalamin depletion, especially in tissues such as the liver and brain, could be the result of mtDNA depletion. I've linked to multiple articles on this in past postings, and the decreases in mtDNA replication or transcription occur fairly rapidly in the liver (within four weeks, I think, in rats). The effect would take longer in humans, but I still think it's possible. Some of the effects probably do have to do with the accumulation of AICAR (ZMP) and the nucleotide depletion that ZMP can produce. Small increases in ZMP could produce phosphate sequestration, and AICAriboside (and possibly ZMP, too, by binding to the cAMP binding site) can inhibit S-adenosylhomocysteine hydrolase, etc. There could be a "cascade" of different mechanisms leading to feed-forward depletion of purines. But the dissociation between megaloblastic anemia and neurological symptoms, in either cobalamin or folate depletion, is subjectively reminiscent of the situation in mitochondrial disorders. Different types of proliferating cells can sometimes be affected and show heteroplasmy, and the degree of heteroplasmy can increase and decrease in mitotic cells, but the cardiac myocytes, skeletal muscle myocytes, liver, and central nervous system tend to be the most severely affected in people with mtDNA mutations. Even though that distribution doesn't completely fit the picture seen in cobalamin and folate deficiencies, some of the articles on folate-responsive neuropathies describe profound muscle weakness and other symptoms reminiscent of mitochondrial disorders.

A more interesting question is this: if folate or cobalamin depletion can produce neurological symptoms and fatty liver (mitochondrial injury) or muscle weakness, and can also produce mtDNA damage and depletion (which may or may not be causal in the symptomatology), then why do exogenous folates improve the neurological symptoms? One could argue that the reduced folates replenish the purine and pyrimidine pools and reduce further accumulation of uracil in mitochondrial DNA, but would that really explain a therapeutic effect in an acquired condition characterized by large deletions in mtDNA, as shown in animal studies? Given what's known about mtDNA, I don't see why it would be possible to reverse such severe damage to mtDNA in tissues such as the brain.

It's possible that there are other mechanisms, such as the effects on gluconeogenic and glycolytic enzymes, as I've mentioned previously. I don't know. Maybe there isn't that much mtDNA damage in a lot of cases, and the reduced folates simply replenish the purine (and thymidine) pools. Purines have strong trophic effects in the brain.

It's also conceivable that reduced folates increase the proliferation of neuronal progenitor cells and that even the apoptotic progenitor cells donate their purine pools to adjacent cells, thereby buffering those cells' intracellular purine pools and adding to the trophic effects that the purines exert extracellularly.