Friday, July 31, 2009

Mechanisms Underlying the Regulation of Serum Parathyroid Hormone Concentrations by Phosphate Availability

This article [Whybro et al., 1998: (http://www.ncbi.nlm.nih.gov/pubmed/9481529)] is great. The authors found that 1000 mg of extra phosphorus, provided by a sodium phosphate supplement administered for a week, didn't significantly decrease serum calcium or increase serum phosphate (there were nonsignificant decreases and increases, respectively) and didn't produce very large increases in serum parathyroid hormone (PTH) concentrations. There are other articles showing similarly marginal effects of not-excessively-high phosphate supplements, and one interesting thing that Whybro et al. (1998) noted is that an increase in phosphate availability to osteoblasts and to the extracellular fluid in bone tissue, for example, has been shown to suppress PTH-induced bone resorption, even when PTH itself increases. Phosphate supplementation could conceivably cause electrolyte abnormalities in the long term and could deplete magnesium, prevent its absorption, or cause other problems. But the problems I see with phosphate supplementation don't really have to do with the supposed effects on the bones; they have to do with the fact that all of these awful effects can emerge, in the long term, as a result of phosphate depletion. And phosphate supplementation, as I've discussed in previous postings, could, in the short term, exacerbate some of those problems.

The arguments that many authors make about phosphate homeostasis are not compelling, in part because such large percentages of people are overtly or almost-overtly deficient in vitamin D or have low dietary intakes of utilizable phosphate (phytates do not provide much usable phosphate, as discussed in past postings). Vitamin D depletion tends to reduce phosphate absorption and retention by multiple mechanisms, many of which do not have to do with the regulation of serum hormonal vitamin D, as discussed below, but have to do with the actions of hormonal vitamin D that is produced and that acts locally, in many (or most) different tissues, in an autocrine and paracrine manner. And, as Whybro et al. (1998) noted, one can't simply look at an increase in serum PTH and say that bone resorption is occurring, in part because phosphate can suppress the "undesirable" effects of PTH. Very high PTH levels are undesirable, but Whybro et al. (1998) discussed research showing that an adequate concentration of extracellular phosphate can inhibit osteoclast differentiation from osteoclast progenitor cells. The authors also note that PTH levels tend to increase after meals and to fluctuate significantly throughout the day, and some of the markers that have been used as indicators of bone resorption have not been especially reliable or "bone-specific" (Whybro et al., 1998).

Another important thing is that the decreases in serum calcium that may occur in response to phosphate supplementation are thought to result from phosphate-induced decreases in renal 1alpha-hydroxylase activity, which lower the levels of serum 1alpha,25-dihydroxyvitamin D (serum hormonal vitamin D), and from phosphate-induced inhibition of calcium absorption. I've seen articles in which the authors simply state that phosphate, given in small dosages, decreases serum calcium by causing ectopic calcification, immediately. That's not true, in my opinion, and the [Ca] x [Pi] product, in the extracellular fluid of a given tissue, usually has to reach something like 58 (presumably in the conventional (mg/dL)^2 units) before that effect occurs. That's not to say that phosphate supplementation couldn't conceivably contribute to calcification, because I think it could, particularly in the kidneys and especially at higher dosages. But, for example, Kemi et al. (2006) [Kemi et al., 2006: (http://www.mm.helsinki.fi/mmkem/RAV/tutkimus/calcium/dokumentit/2006/06_high_phosphorus_intakesf.pdf)(http://www.ncbi.nlm.nih.gov/pubmed/16925861)] used large, single dosages of phosphate in people on diets very low in calcium and noted, in table 2, that the Ca:P ratio decreased drastically as the doses of phosphate and total daily phosphorus intakes increased. One has to actually do the calculations, based on the ratios and not on the wording in the methods section, to see that the total daily calcium intake, for all of the dosages of phosphate, was roughly 250 mg per day. That's very low, and it's almost a given that most of the people in the study were probably vitamin D deficient. In view of the extreme and potentially devastating aspects of phosphate depletion, I think that there is a real danger in pronouncing judgment on phosphate and drawing hasty conclusions. There's a considerable amount of research showing that low phosphate intakes or vitamin D deficiency can actually contribute to osteomalacia and osteoporosis. The effects of calcium supplementation don't impress me at all, and I don't supplement with calcium anymore, personally. There's also the potential for thrombogenic effects from elevations in serum calcium, even within the reference range. Researchers should focus on ways of safely supplementing with magnesium and phosphate and vitamin D and less on the overrated effects of calcium supplementation. I do think sodium phosphate supplementation is potentially problematic, and a person would obviously not want to supplement with phosphate or vitamin D or magnesium or anything else before discussing that with his or her doctor.
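As a rough sketch of the sort of arithmetic I mean (everything in the example is hypothetical except the ~58 threshold and the ~250 mg conclusion discussed above), in Python:

# Conversions from mM to the conventional mg/dL units.
CA_MG_PER_DL_PER_MM = 4.008   # 1 mM calcium is about 4.0 mg/dL
P_MG_PER_DL_PER_MM = 3.097    # 1 mM inorganic phosphate is about 3.1 mg/dL (as phosphorus)

def ca_x_p_product(ca_mm, pi_mm):
    """[Ca] x [Pi] product in the conventional (mg/dL)^2 units."""
    return (ca_mm * CA_MG_PER_DL_PER_MM) * (pi_mm * P_MG_PER_DL_PER_MM)

# Normal-ish serum values are nowhere near the ~58 calcification threshold.
print(ca_x_p_product(2.4, 1.2))  # ~35.8

def implied_calcium_intake_mg(ca_to_p_ratio, phosphorus_intake_mg):
    """Back-calculate a daily calcium intake from a reported Ca:P ratio."""
    return ca_to_p_ratio * phosphorus_intake_mg

# Illustrative numbers in the ballpark of the highest-dose condition in Kemi et al. (2006):
print(implied_calcium_intake_mg(0.14, 1800))  # ~250 mg/day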

Thursday, July 30, 2009

Interactions of Phosphate Metabolism With Energy Metabolism and Adenosine Metabolism

These are some articles showing that phosphate availability can be an important factor that determines the rates of salvage of purine nucleotides and nucleosides, the adenylate charge, and the rate of deamination of adenosine to inosine [Matsumoto et al., 1979: (http://www.jbc.org/cgi/reprint/254/18/8956.pdf)(http://www.ncbi.nlm.nih.gov/pubmed/479172); Lockett et al., 1995: (http://www.ncbi.nlm.nih.gov/pubmed/8579734); (http://scholar.google.com/scholar?hl=en&q=energy+%22inorganic+phosphate%22+salvage+purine+OR+adenylate)]. Matsumoto et al. (1979) discussed the fact that inorganic phosphate [Pi, or PO4(3-)] normally inhibits adenosine monophosphate (AMP) deaminase activity, thereby preventing the catabolism of AMP to IMP. This catabolism, however, can serve to maintain the energy charge, paradoxically, during the inhibition of energy metabolism (Matsumoto et al., 1979). But even when ATP levels are being maintained "normally," the sequestration or loss of intracellular phosphate tends to lead to the loss of adenosine nucleotides (reference 6, cited in Matsumoto et al., 1979). Maj et al. (2000) found that the adenosine-induced inhibition of adenosine kinase (AK), which is a major purine salvage enzyme in the brain and other tissues, decreases as the inorganic phosphate concentration increases. AK is sometimes viewed as being "bad" in the context of cerebral ischemia, and AK inhibitors can reduce brain damage due to ischemia by maintaining adenosine availability, etc. That's another reason that the provision of phosphate in the form of ATP disodium or another purine nucleotide might be advantageous. Phosphate depletion tends to produce a loss of adenosine (and, by extension, guanosine) nucleotides, and phosphate supplementation could have a mixture of beneficial and less-than-beneficial effects, particularly in the short term, on purine metabolism in the brain, for example. Increasing AK activity (meaning the phosphorylation of adenosine) without providing more exogenous adenosine could tend to decrease adenosine availability for cerebral blood flow autoregulation [Sciotti and Van Wylen, 1993: (http://www.ncbi.nlm.nih.gov/pubmed/8436611); (http://scholar.google.com/scholar?hl=en&q=%22adenosine+kinase%22+brain)]. That's partly because the concentrations of adenosine, both intracellularly and extracellularly, are normally far lower than the Km of AK for adenosine. The same argument could be made in the case of the phosphate-mediated inhibition of AMP deaminase. Under conditions of low-level ischemia, as in a person with ATP depletion or purine nucleotide depletion (because of repeated cycles of ischemia or pronounced activation of the noradrenergic stress-response system in the brain), the degradation of AMP to IMP can, paradoxically, be "good," up to a point. In any case, there can even be strange short-term effects, in my opinion, of ATP disodium that could be explained, in part, by those paradoxical aspects of adenosine metabolism. Additionally, a few days may be required for changes in A1 adenosine receptor density or sensitivity to occur, even though extracellular adenosine levels are generally kept almost constant (in part by A1 adenosine receptor activation) [Andresen et al., 1999: (http://www.ncbi.nlm.nih.gov/pubmed/10490889)]. For example, A1 adenosine receptor antagonists tend to increase extracellular adenosine (Andresen et al., 1999) and can also have mood elevating effects or the like.
Although the steady-state levels of extracellular adenosine and the sensitivities of adenosine receptors will generally adapt efficiently, in my opinion, to changes in stimulus-evoked increases in extracellular adenosine (exogenous ATP would be expected to primarily or almost exclusively augment stimulus-evoked extracellular adenosine concentrations and not steady-state extracellular adenosine levels), those adaptations could, in my opinion, require a day or two to take place. In my experience, there was some kind of threshold dosage, in the short term, above which there were no transient periods of somnolence or the like. I don't even know how I'd describe that type of thing, but my point is that there's some steady state that's reached and that there could be, in my opinion, potential for complex interactions with phosphate homeostasis. And the other point was that the use of sodium phosphate could disturb adenosine metabolism in the short term (and potentially the long term), even if one could not say that the effects are exclusively "bad."
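Going back to the Km argument above, here's a minimal numerical sketch (the kinetic constants are hypothetical, chosen only to illustrate the point) of why the flux through AK is so sensitive to both adenosine concentrations and AK activity when adenosine is far below the Km:

def mm_rate(s, vmax, km):
    """Michaelis-Menten rate: v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

km_um = 1.0      # hypothetical Km of AK for adenosine, in micromolar
vmax = 100.0     # arbitrary units

for adenosine_um in (0.05, 0.1, 0.2):
    v = mm_rate(adenosine_um, vmax, km_um)
    # At [S] << Km, v is approximately (Vmax/Km) * [S]: the phosphorylation flux
    # scales almost linearly with both the adenosine concentration and the AK
    # activity (Vmax), so raising AK activity without supplying more adenosine
    # would tend to draw extracellular adenosine down.
    print(adenosine_um, round(v, 2), round(vmax / km_um * adenosine_um, 2))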

Wednesday, July 29, 2009

New Articles by Hiramoto and Colleagues

These are a couple of the newer articles [Hiramoto et al., 2009: (http://www.ncbi.nlm.nih.gov/pubmed/19184072); Hiramoto, 2009: (http://www.ncbi.nlm.nih.gov/pubmed/19500182)] on the effects of ultraviolet radiation (UVR) on trigeminal ganglion neurons, etc. I just came across these today and haven't had a chance to look through them in detail. The one showing that optic nerve denervation blocks the effect of UVA is probably not actually showing that, and it may be that the fibers of ciliary ganglion neurons are being severed. There are articles discussing the difficulty in sparing or even seeing parts of the ciliary ganglia during operations in humans. There's no transmittance of UVA or UVB past the lens and iris. Well, I guess 1-2 percent in the UVA wavelengths can reach part of the vitreous humor [Sliney, 1997: (http://cat.inist.fr/?aModele=afficheN&cpsidt=2855724)], but there's no way that amount could reach the retina and produce biological effects, in my opinion. The articles showing immediate-early gene expression in parts of the retina, following UVR, are showing a polysynaptic effect of some kind, in my opinion, and I don't think there could be a "coherent" effect of something that could, by some fluke, reach those cells by diffusion from cells in the anterior chamber, etc. Another thing is that some ciliary ganglion neurons actually do innervate the corneas in rodents, and that may well account for the past research showing that ciliary ganglionectomy could antagonize the effects of UVR exposure. But the effects on the trigeminal ganglion neurons are likely to predominate, in my view.

Phosphate Diabetes and the Renal Threshold Phosphate Concentration: Relevance to Chronic Fatigue Syndrome and Fibromyalgia, etc.

In this article [Laroche et al., 2009: (http://www.ncbi.nlm.nih.gov/pubmed/19148564)], Laroche et al. (2009) discussed so-called "phosphate diabetes." Phosphate diabetes is an old term that now encompasses genetic forms of vitamin D-resistant rickets and, from what I can tell, idiopathic forms of cellular phosphate depletion. The condition is distinct from severe hypophosphatemia, I guess, because the serum phosphate levels tend to be (but do not have to be) lower than 0.85 mM. The 24-hour urinary phosphate excretion also tends to be more than 20 mmol in phosphate diabetes, but the phosphaturia threshold, or renal threshold phosphate concentration, is used in the actual diagnosis and should be less than 0.83 (in mM, presumably). The renal threshold phosphate concentration is a "normalized concentration" (TmPO4/GFR). The explanations of these terms tend to be confusing. TmPO4 is the (estimated) tubular phosphate concentration at which the renal tubular reabsorption rate is maximal (estimated because one can't easily measure the tubular fluid phosphate concentrations), and all of the superfluous PO4 will be excreted in the urine at tubular phosphate concentrations higher than that threshold value. The GFR is easily obtained, and the TmPO4/GFR is the TmPO4 normalized to the GFR by the use of a nomogram (it's not an equation, and one has to draw a line through a chart). I can't get the original full text of the article showing the nomogram, and most of the articles that cite it don't show the thing. Here's the chart in a book that's unprintable (http://books.google.com/books?id=9gvBlktAT6YC&pg=PA618&lpg=PA618&dq=%22renal+threshold+phosphate%22+TmPO4+GFR&source=bl&ots=L17iWRr9PQ&sig=c7eHPGGeQSWGaiMg3tztkmd7SrA&hl=en&ei=4rhwSvL8LpnmlQecsun_Dw&sa=X&oi=book_result&ct=result&resnum=17). One has to calculate the TRP (the tubular reabsorption of phosphate, expressed as a percentage) first, and, to calculate the TRP, one has to know the fasting urinary creatinine and phosphate concentrations and the fasting serum phosphate and creatinine concentrations. It would be nice to have an equation, even a complex one, instead of a "gram." It seems that a low TmPO4/GFR means that the urinary phosphate loss is high, in relation to the amount of phosphate being filtered per unit time, and that the person is losing phosphate and may be almost overtly hypophosphatemic. Of course, one has to assume that the nomogram is valid, but at least one might use that to attempt to make some kind of quantitative evaluation of subtleties in phosphate homeostasis. At least there's something.
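There is what I believe is a commonly cited algebraic approximation to that nomogram (it's usually attributed to Payne, 1998, but I'm going from memory here, so treat the piecewise formula as an assumption), and a minimal sketch of the calculation, with made-up fasting values, would look something like this:

def trp(serum_pi, serum_cr, urine_pi, urine_cr):
    """Tubular reabsorption of phosphate (TRP), as a fraction of the filtered load.

    All four concentrations must be in the same units (e.g., mM).
    """
    return 1.0 - (urine_pi * serum_cr) / (serum_pi * urine_cr)

def tmpo4_gfr(serum_pi, serum_cr, urine_pi, urine_cr):
    """Approximate renal threshold phosphate concentration (TmPO4/GFR), in the
    units of serum_pi, using the piecewise approximation described above."""
    frac = trp(serum_pi, serum_cr, urine_pi, urine_cr)
    if frac <= 0.86:
        return frac * serum_pi
    # At high reabsorption fractions the nomogram's relationship is nonlinear.
    return (0.3 * frac / (1.0 - 0.8 * frac)) * serum_pi

# Hypothetical fasting values: serum Pi 0.80 mM, serum creatinine 0.08 mM,
# urine Pi 15 mM, urine creatinine 10 mM.
print(trp(0.80, 0.08, 15.0, 10.0))        # 0.85, i.e., 85 percent reabsorbed
print(tmpo4_gfr(0.80, 0.08, 15.0, 10.0))  # ~0.68 mM, below the ~0.83 cutoff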

I'm not going to go into a lot of detail in this posting, but there are some old articles that discuss fibromyalgia-like symptoms in "phosphate diabetes" [Amor et al., 1995a: (http://www.ncbi.nlm.nih.gov/pubmed/7788334); Amor et al., 1995b: (http://www.ncbi.nlm.nih.gov/pubmed/7788335)]. I can't get the full texts of those, and they look, at first glance, to be odd articles. But they're actually very much consistent with other research on the effects of phosphate depletion. The condition has also been found in people with chronic fatigue syndrome, and De Lorenzo et al. (1998) [De Lorenzo et al., 1998: (http://pmj.bmj.com/cgi/reprint/74/870/229.pdf)(http://www.ncbi.nlm.nih.gov/pubmed/9683977)] discuss research [one or both of the articles by Amor et al. (1995)] showing that the pain and other symptoms of phosphate depletion may not resolve after the serum phosphate has increased or normalized and may require more time to be relieved. I've discussed the risks of phosphate supplementation in past postings, and phosphate can bind magnesium and calcium and prevent their absorption, etc. If one used any extra phosphate, at low doses under a doctor's supervision, for some abnormality or set of symptoms that a doctor had confirmed or suspected, one might consider using supplemental magnesium. The dosages of magnesium required to overcome the binding to phosphate could be significant, and I don't know what they might be. The same thing could conceivably be true in the case of supplementation with oral ATP disodium, although the dosages of that would probably be low enough to not provide a large amount of phosphate.

To say that someone has phosphate diabetes is, as far as I can tell, basically like saying phosphate depletion, and it can cause pain and osteomalacia, etc. The authors of a lot of the articles cite research claiming that phosphate diabetes occurs more in men, but that doesn't make sense to me. Maybe the equation/gram is only valid in men because they used men in 1975 to derive it. I don't know, but fibromyalgia is more common in women, I think, than in men. So maybe they're not looking for phosphate depletion in women with those symptoms. Researchers have also compared the musculoskeletal pain in vitamin D deficiency to the pain in fibromyalgia [see Holick, 2004, and others: (http://scholar.google.com/scholar?hl=en&q=%22vitamin+D%22+pain+fibromyalgia)]. The association of chronic fatigue syndrome with phosphate depletion is also consistent with the hypothesized association of chronic fatigue syndrome with impairments in the adrenergic regulation of energy metabolism [Bains, 2008: (http://www.cfids-cab.org/MESA/Bains.pdf)(http://www.ncbi.nlm.nih.gov/pubmed/18684570)]. That's a great article. This is a bad search, but there's some research showing that sympathomimetics can increase phosphate reabsorption [such as LeClaire et al., 1992: (http://scholar.google.com/scholar?q=adrenoreceptor+phosphate+transport+OR+uptake&hl=en)], and adrenergic agonist drugs tend, from what I've read so far, to increase cellular phosphate uptake. But chronic stress and the adrenergic activation associated with it, as discussed by Bains (2008), can produce hypophosphatemia in the long term. One could make the argument that phosphate depletion might be one factor sustaining ATP depletion in the brain in chronic fatigue syndrome, for example, because phosphate depletion can produce ATP depletion and hypoxic and ischemic injuries to the brain, in extreme cases. In more mild cases, the result could be the "pseudodepression" discussed by Amor et al. (1995a) in the context of phosphate diabetes, etc.

But De Lorenzo et al. (1998) give the impression that phosphate supplementation tends not to be all that effective or to have predictable effects, and that's the impression I've gotten from other articles. I tend to think ATP disodium might be more useful, given that adenosine nucleotides can exert a kind of phosphate-buffering effect (as researchers have discussed), along with magnesium (via the retention of adenosine nucleotides as MgATP2-, etc.), and given that magnesium deficiency is thought to impair parathyroid hormone (PTH) secretion by reducing adenylate cyclase activity (which is magnesium-dependent) in the parathyroid tissue [Bush et al., 2001: (http://www.ncbi.nlm.nih.gov/pubmed/11767924)]. There are other reasons, but, to the extent that phosphate depletion can lead to adenosine nucleotide depletion, the cAMP levels in parathyroid tissue might be lower in the context of phosphate depletion, as they are in magnesium deficiency-associated hypocalcemia. That could mean that the "safeguarding" effect of PTH secretion, in response to the potential decrease in serum calcium that could occur in vulnerable individuals taking exogenous phosphate, is compromised in the context of phosphate depletion. The use of ATPNa2 instead of sodium phosphate could conceivably compensate for that, but, above some dosage, supplementation with a source of exogenous phosphate, in any form, would probably start to cause some problems.

Tuesday, July 28, 2009

Hemostasis Due to Hemolysis in Hypophosphatemia; Interplay of Venous Ischemia and Thrombosis in Intracranial Hypertension

The authors of this editorial [Gladwin and Kato, 2008: (http://www.haematologica.com/cgi/content/full/93/1/1)(http://www.ncbi.nlm.nih.gov/pubmed/18166776)] discussed research showing that measures of the rate of intravascular (as opposed to intramedullary, I suppose) hemolysis, in people with sickle-cell disease, had correlated positively with markers of hemostasis and the activation of the coagulation cascade. This type of research (http://scholar.google.com/scholar?hl=en&q=hemolysis+thrombosis+OR+thrombogenic+OR+hypercoagulability+OR+coagulopathy), along with that editorial and the accompanying article [Ataga et al., 2008: (http://haematologica.com/cgi/content/full/93/1/20)(http://www.ncbi.nlm.nih.gov/pubmed/18166781)], suggests to me that the hemolysis that not-uncommonly occurs (http://scholar.google.com/scholar?hl=en&q=hemolysis+hypophosphatemia) in the context of hypophosphatemia, for example, could contribute to some of the complications associated with hypophosphatemia (or with the depletion of intracellular phosphate). It's interesting that hemolysis can produce transient elevations in serum phosphate, but hemolysis per se, as in the context of autoimmune hemolytic anemia, can cause hypophosphatemia, given that the kidneys simply excrete more phosphate in response to the hemolysis-induced elevations. Over time, those types of cycles of phosphate loss, through hemolysis or exercise-induced glycolytic activity or glycolytic activity due to thrombosis caused by hemolysis, could lead to hypophosphatemia. A lot of the articles on exercise and serum phosphate report hypophosphatemia, but it's conceivable to me that hypophosphatemia-induced hemolysis could "falsely" and intermittently elevate serum phosphate to normal levels and confound attempts to diagnose the condition. Thatte et al. (1995) [Thatte et al., 1995: (http://www.ncbi.nlm.nih.gov/pubmed/7573122)] discussed research showing that hemolysis can induce hyperphosphatemia. The thrombogenic state associated with hemolysis could contribute to the intracranial hypertension that researchers have associated with hypophosphatemia and could contribute to exercise-induced rhabdomyolysis, given that rhabdomyolysis has commonly been reported in the context of hypophosphatemia.

Part of the reason this is interesting is that the combination of thrombocytopenia, which is not uncommon in association with hypophosphatemia (http://scholar.google.com/scholar?q=thrombocytopenia+hypophosphatemia&hl=en), with hemolysis-induced thromboses and activation of the coagulation cascade is especially dangerous and is one reason that heparin-induced thrombocytopenia, for example, is such a dangerous and difficult-to-manage condition. My understanding of it is that thromboses are constantly reforming and undergoing thrombolysis and that this causes small injuries and basically amounts to diffuse ischemia and hypoxia. But one explanation for the bleeding that accompanies thrombogenic conditions, such as heparin-induced thrombocytopenia, is that localized smooth muscle cell necrosis results from localized thromboses and that the small-scale reperfusion or recanalization at the site of "arterionecrosis" or microcapillary damage produces the hemorrhages. Then platelets adhere to the site of the microhemorrhage, etc. Gladwin and Kato (2008), cited above, discussed the fact that the scavenging of nitric oxide by cell-free hemoglobin can be drastically increased during hemolysis and that this, along with the generation of reactive oxygen and nitrogen species by the hemoglobin, can then produce ischemia. Nitric oxide also generally inhibits platelet function, etc., and that effect would be decreased during hemolysis.

That type of discussion, in the editorial by Gladwin and Kato (2008), is helpful to me, as far as understanding the elevations in intracranial pressure that result from so many different causes and that occur in so many different contexts. In a lot of the cases of intracranial hypertension due to retinoids or excess vitamin A (http://scholar.google.com/scholar?hl=en&q=%22intracranial+pressure%22+ischemic+OR+ischemia+OR+ischaemia+OR+ischaemic), the condition is described as being an ischemic encephalopathy or something similar. To say that intracranial hypertension is caused by or is associated with venous sinus thrombosis is basically to say that it's partly sustained by ischemia, for example, and it's almost inconceivable to me that there could ever be a total absence of ischemia, mild or not-mild, in the context of intracranial hypertension. The article by Gladwin and Kato (2008) gets at that. Thromboses, by definition, impair tissue oxygenation and would be expected to impair blood-flow regulation by augmenting the rate of formation of reactive oxygen and nitrogen species, etc. This doesn't necessarily mean that every effort to restore cerebral blood flow would be effective in the treatment of intracranial hypertension, but I think it's an important consideration. Also, one way of describing this might be to consider the distinction between venous ischemia and arterial ischemia, and intracranial hypertension has been shown to occur in association with venous ischemia [(http://scholar.google.com/scholar?hl=en&q=venous+sinus+thrombosis+%22venous+ischemia%22+intracranial+pressure); (http://scholar.google.com/scholar?hl=en&q=%22venous+ischemia%22+intracranial+pressure); (http://scholar.google.com/scholar?hl=en&q=%22venous+ischemia%22+cerebral+OR+intracranial+OR+brain); (http://scholar.google.com/scholar?hl=en&q=%22venous+ischemia%22)].

Monday, July 27, 2009

Phytates (Inositol Hexaphosphate and Related Compounds), in Plant-Based Foods, as Poor Sources of Utilizable Phosphate

The authors of this article [Johnson and Tate, 1969: (http://article.pubs.nrc-cnrc.gc.ca/ppv/RPViewDoc?issn=1480-3291&volume=47&issue=1&startPage=63)], a poorly-chosen but adequate one, discuss the fact that phytates in grains and plants and plant proteins are inositol polyphosphates, and they're not digested effectively in non-ruminants (i.e. humans and non-cow mammals, etc.). Phytic acid is the major one, and it's just myo-inositol hexaphosphate, a.k.a. inositol hexakisphosphate. But some of the other "phytates" are just inositol phosphorylated at one or more different oxygens, etc. It's really astonishing to me that this issue isn't discussed more than it is (I've never seen discussions of the extremely low digestibility of phytates), even apart from any consideration of the importance of phosphate. Iqbal et al. (1994) [Iqbal et al., 1994: (http://www.pubmedcentral.nih.gov/picrender.fcgi?artid=1375699&blobtype=pdf)(http://www.ncbi.nlm.nih.gov/pubmed/7959229)] discussed the fact that the phytase (inositol polyphosphate phosphohydrolase) activity in the human stomach and small intestine is, as is known, produced not by any human enzyme with phytase activity but by yeast or bacterial phytases found in foods. Letcher et al. (2008) [Letcher et al., 2008: (http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pubmed&pubmedid=18684107)(http://www.ncbi.nlm.nih.gov/pubmed/18684107)] is a great article, in which the authors discussed the fact that phytates are unlikely to be transported into cells to a significant extent, because, upon their absorption, the compounds would form complexes with calcium and other divalent cations that are very low in solubility, etc. Human cells can hydrolyze inositol polyphosphates, but maybe there's something about the orientations/conformations of the phosphate linkages that precludes their hydrolysis by human enzymes (inositol triphosphate and other inositol phosphates act as second messengers and exert other signalling functions, etc.). But Letcher et al. (2008) make it sound as if dietary phytates don't even yield much phosphate, assuming they're absorbed at all, because of poor uptake into cells. I found some articles showing that meat essentially provides much more nutritionally-available phosphate than plant protein sources (plant foods that provide phosphate in phytate compounds) do, and that's the sense I'm also getting from these articles. So the nutrition information on the "phosphorus" content (from phosphate) in a plant-protein-based food is potentially going to be essentially incorrect, given that little utilizable phosphate can be provided by phytate compounds. Lopez et al. (2002) [Lopez et al., 2002: (http://cat.inist.fr/?aModele=afficheN&cpsidt=13944404)] discussed research showing that 80 percent of the phosphorus in plants or seeds exists in phytate compounds. The authors of this article [Sandberg et al., 1987: (http://jn.nutrition.org/cgi/reprint/117/12/2061.pdf)(http://www.ncbi.nlm.nih.gov/pubmed/2826727)] found that 58 percent of phytate(s) were hydrolyzed to inositol pentaphosphates (IP5's), IP4's, and IP3's, and that makes it sound as if microbial phytase activity can release variable amounts of phosphate. But one has to consider the fact that the IP5's or IP4's might just bind to divalent cations and precipitate, etc.
And even the phytates that are absorbed by paracellular diffusion (many compounds can be absorbed by passive diffusion, to a larger extent than most articles imply or state) are not likely to yield much utilizable phosphate and may very well just be excreted in the urine (Letcher et al., 2008), etc.
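Here's a very rough sketch of why the labeled "phosphorus" content of a plant-based food overstates the utilizable phosphate. The 80 percent phytate figure is the one from Lopez et al. (2002), quoted above; the fraction of phytate phosphorus actually released by food phytases is a made-up parameter, since that fraction is exactly what seems to be uncertain:

def estimated_available_phosphorus_mg(label_p_mg, phytate_fraction=0.8,
                                      phytate_p_released=0.0):
    """Estimate utilizable phosphorus from the labeled phosphorus of a plant food.

    phytate_p_released is the (uncertain) fraction of phytate-bound phosphorus
    that microbial phytases in the food actually liberate as absorbable phosphate.
    """
    non_phytate_p = label_p_mg * (1.0 - phytate_fraction)
    released_p = label_p_mg * phytate_fraction * phytate_p_released
    return non_phytate_p + released_p

# A hypothetical plant food labeled as providing 300 mg of phosphorus:
print(estimated_available_phosphorus_mg(300))                          # 60 mg if no phytate-P is released
print(estimated_available_phosphorus_mg(300, phytate_p_released=0.2))  # 108 mg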

That research is important, given that phosphate depletion could contribute to osteomalacia (see article, in the previous posting, on the capacity of calcium supplementation to bind phosphate in the GI tract and contribute to osteomalacia by inducing phosphate depletion) and all sorts of other problems, over time (see previous postings). I'm not even suggesting that eating meat protein and obtaining phosphate from meats is desirable or undesirable. I'm just trying to get a handle on what the actual amounts of utilizable phosphate are in different food groups.

Sunday, July 26, 2009

More Information on Phosphate Homeostasis

In this letter [Roestel et al., 2004: (http://ajp.psychiatryonline.org/cgi/content/full/161/8/1499-a)(http://www.ncbi.nlm.nih.gov/pubmed/15285984)], Roestel et al. (2004) discuss research showing that serum phosphate correlates negatively with symptoms of anxiety and with physical complaints, etc. Roestel et al. (2004) cite this article [Maddock et al., 1987: (http://www.ncbi.nlm.nih.gov/pubmed/3659218)] as evidence of that, and there's a considerable amount of research showing that hypophosphatemia is commonly associated with various psychiatric disorders (http://scholar.google.com/scholar?hl=en&q=hypophosphatemia+psychiatric+OR+psychiatry), especially panic disorder. Some of those articles discuss the capacity of an elevated rate of lactate formation, from various tissues, to contribute to phosphate depletion, and that's been discussed in the context of exercise-induced phosphate depletion (http://scholar.google.com/scholar?hl=en&q=hypophosphatemia+lactate+exercise). Adrenaline, by its activation of beta-adrenoreceptors, in particular, is also thought to contribute to phosphate depletion in the long term and to short-term decreases in serum phosphate, given that adrenergic stimulation tends to increase the uptake of phosphate into cells (e.g. skeletal muscle cells, etc.). The same effect could result from chronic psychological stress, given that even anticipated stress can increase the firing rates of noradrenergic neurons in the locus ceruleus, for example. The authors of a lot of articles discuss the fact that hypophosphatemia tends to be associated with irritability, even in people who do not have panic attacks or whatever other symptoms (http://scholar.google.com/scholar?q=hypophosphatemia+irritable+OR+irritability&hl=en). As vague as that may sound, a lot of these articles discuss those types of excessively excitatory states in the context of hypophosphatemia or relative phosphate depletion. It's interesting that Nanji et al. (1985) [Nanji et al., 1985: (http://www.ncbi.nlm.nih.gov/pubmed/4045178)] suggested that phosphate depletion may have been a cause and not (just) a consequence of liver failure/liver-cell necrosis in one case, and Quirós-Tejeira et al. (2005) [Quirós-Tejeira et al., 2005: (http://www.ncbi.nlm.nih.gov/pubmed/16101727)] found that the normalization of serum phosphate levels paralleled the normalization of liver function in children who had experienced liver damage. Fructose is known to induce ATP depletion by sequestering phosphate, for example, and the point is that hypophosphatemia may actually be, in part, a cause and not just a consequence of some of these conditions that have been associated with hypophosphatemia.

I tend to think that something like ATP disodium might be safer than sodium phosphate for correcting phosphate depletion (in part because some of it is likely to be absorbed intact, by paracellular diffusion, and because oral ATP has been shown to elevate adenosine diphosphate and monophosphate levels in the portal venous blood in animals, implying that less of the ATP-derived phosphate might remain in the GI tract and bind calcium and magnesium; there are several other reasons I say that), but it's important to discuss this with one's doctor. Phosphate can inhibit the absorption of calcium, magnesium, iron, and probably other minerals and could, particularly but not exclusively at higher doses, cause life-threatening hypocalcemia in people whose mineral metabolism is already deranged, etc. The kidneys and GI tract and calciotropic hormone system also tend to adapt rapidly to phosphate supplementation, and it seems like it would be more effective, in the long term, in my opinion, to consider some kind of intermittent "challenge" with something like ATP, in combination with resistance exercise, than to try to use sodium phosphate. I actually don't know much about phosphate supplementation, though, and I've never used sodium phosphate and don't intend to. Bremner et al. (2002) [Bremner et al., 2002: (http://www.ncbi.nlm.nih.gov/pubmed/12135811)] found that 4 grams per day of sodium phosphate (Na2HPO4) increased serum phosphate by 30 percent and caused a 25 percent increase in the (intracellular) 2,3-bisphosphoglycerate (BPG) levels in red blood cells (RBC's). BPG alters the affinity of hemoglobin for oxygen and improves the "unloading" of oxygen from hemoglobin, and that's a fairly significant effect. Many other articles have shown similar effects, and Bremner et al. (2002) noted that the steady-state increase in RBC BPG appears to require two or three days to occur at that dosage range (3-4 grams per day). Bremner et al. (2002) also found that the RBC BPG levels didn't correlate with the serum inorganic phosphate concentrations, and that absence of a correlation between serum phosphate and intracellular phosphate levels has been found by many other researchers for many other cell types, such as skeletal muscles, etc. I've discussed that in past postings, and I've also discussed the fact that calcium supplementation tends to bind phosphate in the GI tract and could thereby cause phosphate depletion. Heaney and Nordin (2002) [Heaney and Nordin, 2002: (http://www.ncbi.nlm.nih.gov/pubmed/12074251)] discuss that, and it's interesting that a lot of the issues surrounding phosphate metabolism and phosphate intake have ended up being much more complicated than people had previously thought they would be. Hypophosphatemia can cause lactic acidosis, for example, but the correction of the acidosis with bicarbonate impairs oxygen unloading from hemoglobin in the same way that BPG depletion from RBC's does, and it can compound the existing BPG depletion in people who have hypophosphatemia, with disastrous consequences [Jacob, 1975: (http://www.pubmedcentral.nih.gov/picrender.fcgi?artid=1129800&blobtype=pdf)(http://www.ncbi.nlm.nih.gov/pubmed/1136448)]. But phosphate supplementation can gradually elevate serum bicarbonate and cause compensated metabolic alkalosis.
Those types of interactions with acid-base regulation could account for some of the cases in which hypophosphatemia has been associated with idiopathic intracranial hypertension (or, rather, not-idiopathic) (http://scholar.google.com/scholar?hl=en&q=hypophosphatemia+papilledema+OR+%22intracranial+pressure%22+OR+pseudotumor+OR+pseudotumour+OR+%22intracranial+hypertension%22). The rate of CSF formation by the choroid plexuses can be dysregulated by acid-base dysequilibria across the blood-CSF barrier, and the resulting intracranial hypertension has been treated with acetazolamide and other carbonic anhydrase inhibitors [see Horovitz et al., 1985: (http://scholar.google.com/scholar?hl=en&q=acid+base+dysequilibrium+papilledema+OR+%22intracranial+pressure%22+OR+pseudotumor+OR+pseudotumor+OR+%22intracranial+hypertension%22)]. It's conceivable that some of the psychiatric symptoms associated with hypophosphatemia are actually, in part, a result of mild elevations [as discussed in this posting, for example: (http://hardcorephysiologyfun.blogspot.com/2009/07/dimming-of-vision-in-depression.html)] in intracranial pressure. There are other paradoxical aspects of phosphate homeostasis, such as the fact that alkaline phosphatase activity is pH-sensitive, to a potentially-important degree, and is inhibited by inorganic phosphate. So phosphate derived from the hydrolysis of pyrophosphate, by smooth-muscle cell alkaline phosphatase, would tend to increase the risk of calcification, by locally diminishing the inhibitory effect of pyrophosphate, but could phosphate depletion conceivably augment alkaline phosphatase activity by causing metabolic acidosis and excessive phosphate turnover, because of adenosine nucleotide deamination in the context of wild swings, up and down, in intracellular free inorganic phosphate concentrations? I don't know, but it's complicated. And the relationship between serum phosphate or extracellular fluid phosphate and calcification risk is "U-shaped," as various authors have noted. In any case, a lot of the research is interesting.

Saturday, July 25, 2009

Vitamin D and Phosphate: Relevance to Muscle Weakness in Vitamin D Depletion & Significance of Lipoprotein Binding of Orally-Administered Vitamin D

Chudley et al. (1981) [Chudley et al., 1981: (http://www.pubmedcentral.nih.gov/picrender.fcgi?artid=1862641&blobtype=pdf)(http://www.ncbi.nlm.nih.gov/pubmed/6793223)] noted that muscle weakness and low-level indications of peripheral neuropathy (and overt neuropathy, but the authors don't really discuss that) can occur in people who are hypophosphatemic, and it's interesting that researchers have commonly found muscle weakness and balance problems (predisposing to falls in elderly people) and muscle pain to be associated with vitamin D deficiency (given that vitamin D supplementation is used to treat hypophosphatemia due to a number of different causes). There's actually evidence that the phosphate depletion that results (in part) from the secondary hyperparathyroidism that, in turn, results from vitamin D deficiency does contribute to muscle weakness and the associated problems (muscle pain, balance problems) (http://scholar.google.com/scholar?q=phosphate+%22vitamin+D%22+muscle+weakness&hl=en). It actually seems likely that the muscle weakness could partly be due to neuropathy induced, in part, by the phosphate depletion that accompanies vitamin D deficiency. I should mention that I don't think vitamin D (or rather the elevations in 25-hydroxyvitamin D) produced in response to UVB is as calcemic as oral vitamin D is, and I base that statement on various lines of evidence. I think it might be that oral vitamin D is converted into 25-hydroxyvitamin D and then into 1alpha,25-dihydroxyvitamin D locally, in enterocytes in the GI tract (the cytochrome P450 enzyme(s) that display(s) vitamin D 25-hydroxylase activity is/are expressed in a wide variety of tissues), or it may be a result of the fact that the oral vitamin D becomes more highly bound to lipoproteins [Haddad et al., 1993: (http://www.pubmedcentral.nih.gov/picrender.fcgi?pmid=8390483&blobtype=pdf)(http://www.ncbi.nlm.nih.gov/pubmed/8390483)]. That could cause its transport into cells to be regulated in different ways, etc. That article by Haddad et al. (1993) is great, and they say that vitamin D3 produced in the skin (and not just 25-hydroxyvitamin D produced in the liver) is primarily bound to vitamin D binding protein (Gc globulin), but orally-administered vitamin D becomes bound to chylomicrons and lipoproteins and enters the liver much more rapidly than skin-derived vitamin D. The issue wouldn't just be the rate of transport into the liver, though, because the lipoprotein-bound vitamin D and 25-hydroxyvitamin D would enter cells in ways that would not be subject to the same endosomal regulatory mechanisms, etc., as the transport of vitamin-D-binding-protein-bound vitamin D or 25-hydroxyvitamin D would be subject to.

Regardless of the mechanisms, there does seem to be some difference that, in my opinion, makes oral vitamin D more calcemic, and that could be important in the context of some of these issues related to the effects of vitamin D repletion on phosphate homeostasis. I remember reading an old article in which someone suggested that some of the neuroprotective effects of vitamin D, in the context of in vitro or animal experiments, might be mediated by its effects on phosphate homeostasis (on the preservation of ATP, as a result of its phosphate-"sparing" effects and effects on phosphate transport, etc.), but I forget what the article was specifically testing (and I can't find it right now). There would be a limit to the supposed beneficial effects of vitamin D repletion on phosphate homeostasis, and the concomitant elevations in serum calcium could, to some extent, negate the benefits associated with the vitamin D-mediated reductions in urinary phosphate loss (an effect that is, in part, secondary to the localized conversion of 25-hydroxyvitamin D into hormonal vitamin D, in the parathyroid tissue, and autocrine/paracrine suppression of parathyroid hormone release, etc.).

Friday, July 24, 2009

Epstein-Barr Virus, Proximal Tubule Dysfunction, Fanconi's Syndrome, Phosphate Depletion, and Multiple Sclerosis

Fanconi's syndrome is a set of pathological effects, including hypophosphatemia and proteinuria and electrolyte abnormalities and urinary amino acid loss, that result from proximal tubule dysfunction or, more generally, renal tubular acidosis and that can be caused by drugs or any number of other factors. Although the authors of many articles state that Fanconi's syndrome is rare and typically only occurs in people who have multiple myeloma, it is possible that primary Epstein-Barr Virus (EBV) infection could cause Fanconi's syndrome or pathological effects akin to Fanconi's syndrome. EBV infects proximal tubule epithelial cells (http://scholar.google.com/scholar?hl=en&q=%22Epstein-Barr%22+%22proximal+tubule%22+interstitial), and researchers have suggested that the inflammatory immune response to EBV-infected proximal tubule cells could account for cases of EBV-induced interstitial nephritis. A large amount of evidence has implicated late, primary EBV infection as being an important factor that may contribute to the development of lupus in some people (http://scholar.google.com/scholar?hl=en&q=%22Epstein-Barr%22+lupus), and the autoimmune hemolytic anemia that can occur in infectious mono (http://scholar.google.com/scholar?hl=en&q=%22Epstein-Barr%22+hemolytic+anemia) could contribute to proximal tubule dysfunction and acidosis, etc. EBV infection has caused outright renal failure, also [Federman and Hammer, 1996; Davies et al., 1980: (http://scholar.google.com/scholar?hl=en&q=%22Epstein-Barr%22+%22renal+failure%22)]. The infection of epithelial cells of the biliary tree by EBV could also contribute to renal tubular acidosis and conceivably produce Fanconi's syndrome, given that liver dysfunction tends to lead to kidney disease of one kind or another. Although the authors of many articles tend to say that EBV does not infect biliary epithelial cells, other authors accept that EBV almost certainly infects biliary epithelial cells and epithelial cells in many other organs, etc. If it doesn't infect any cells in the liver, why are there massive numbers of reports of hepatitis following EBV-induced (as opposed to the more rare, CMV-induced) mononucleosis (http://scholar.google.com/scholar?hl=en&q=%22Epstein-Barr%22+mononucleosis+hepatitis+OR+hepatic+OR+hepatocellular)? Come on. (Of course, there's often an obligatory statement, in each of those countless articles, stating that the complication is "rare.") Also, light-chain nephropathy can cause Fanconi's syndrome [Messian et al., 2000: (http://www.ncbi.nlm.nih.gov/pubmed/10844934)], such as in people who have hematological malignancies, and primary EBV infection is essentially a self-limiting lymphoproliferative condition, a self-limiting B-cell lymphoma.

It's interesting that hypophosphatemia, which is a major effect of Fanconi's syndrome, has been treated with oral phosphate and relatively high doses of vitamin D3 for many years [a bad example, and I'm talking about people who have hypophosphatemia that is not caused by the genetic disorder of vitamin D-resistant rickets: Berkelhammer and Bear, 1984: (http://www.pubmedcentral.nih.gov/picrender.fcgi?artid=1875686&blobtype=pdf)(http://www.ncbi.nlm.nih.gov/pubmed/6418367)]. Haglin (2004) [Haglin, 2004: (http://www.ncbi.nlm.nih.gov/pubmed/15236787)] suggested that phosphate depletion could contribute to the development of multiple sclerosis, and there is a large amount of evidence that EBV infection may interact with other factors to contribute to the development of MS, etc. It's possible that the osmoregulatory disturbances and hypophosphatemia that can result from proximal tubule dysfunction, such as might result from and persist in response to primary EBV infection, could contribute to the development of MS. Vitamin D is widely known to reduce urinary phosphate loss, by reducing parathyroid hormone levels and by other mechanisms, and that could be one mechanism underlying the supposed protective effects of UVB-induced elevations in serum 25-hydroxyvitamin D, in the context of the etiology of MS.

Hypophosphatemia can produce outright hypoxic brain injuries (by depleting 2,3-bisphosphoglycerate/2,3-diphosphoglycerate) and not just cerebral ischemia [Jacob, 1975: (http://www.pubmedcentral.nih.gov/picrender.fcgi?artid=1129800&blobtype=pdf)(http://www.ncbi.nlm.nih.gov/pubmed/1136448)], and bicarbonate administration can be disastrous in people with hypophosphatemia (Jacob, 1975). I generally think ATP disodium would be more effective and potentially safer than sodium phosphate for the correction of hypophosphatemia, in some cases, but that's just my opinion. I say that because phosphate depletion causes ATP depletion, and phosphate, by itself, would not be expected, in my opinion, to restore very effectively the adenosine nucleotide pools that have been lost as a result of chronic hypophosphatemia or cellular phosphate depletion. One should discuss any of these things with one's doctor, and oral phosphate supplementation can be very problematic and dangerous, especially in people with any kind of disease state. But a key point that I discussed in a past posting is that ATP depletion can result from phosphate depletion and not produce any significant decrease in serum phosphate levels, for example. In many of the instances in which researchers have stated that phosphate depletion is rare or that Fanconi's syndrome is rare, the statements have presumably been made on the basis of serum phosphate levels or on some sort of measurement of urinary phosphate excretion, in response to phosphate ingestion or another stimulus. But it's not possible to rule out phosphate depletion by looking at those indications, in my opinion. Thus, any statement that phosphate depletion is a rare cause of such-and-such a condition is unlikely to be a defensible statement, from a logical or rational standpoint. I don't think that phosphate depletion is necessarily all that common, but the point is that, as noted by Jacob (1975), phosphate depletion may go undetected and contribute to encephalopathies in people with alcoholism and diabetes, etc. Phosphate depletion can also cause immunosuppression, and, paradoxically, immunosuppression is thought to be able to contribute to the development of lupus and other autoimmune diseases (to the extent that immunological factors contribute in primary ways to the progression of MS), such as by reducing the levels of circulating anti-idiotypic antibodies (naturally-occurring, circulating anti-IgG IgG's that prevent the binding of autoantibodies to proteins and other targets), etc. [see Berchtold et al., 1989: (http://bloodjournal.hematologylibrary.org/cgi/content/abstract/74/7/2414)]. That's not the only mechanism underlying that paradoxical phenomenon, in any case.

Thursday, July 23, 2009

Phosphate in Meats, etc.

The author of this article [Massey, 2003: (http://jn.nutrition.org/cgi/content/full/133/3/862S)(http://www.ncbi.nlm.nih.gov/pubmed/12612170?dopt=Abstract)] discussed the fact that the phosphorus in meat exists mainly in phosphates but that the phosphorus in vegetable proteins exists mainly in phytate compounds. Some of the phosphate in meat is apparently bound to proteins (Massey, 2003), and I'm assuming the author is referring to phosphate bound covalently to proteins (phosphorylated proteins). Massey (2003) provides a figure of 20 mg phosphorus per gram of meat protein (~61 mg PO4 per g meat protein, given that phosphate is about 32.6 percent phosphorus by mass). Not all of that PO4 is going to be derived from phosphorylated proteins, though, because polyphosphates (mixtures of diphosphates and triphosphates, evidently) are apparently added to many meats, to enhance the capacities of the meats to hold water and to increase the pH of the meats [Dusek et al., 2003: (http://cat.inist.fr/?aModele=afficheN&cpsidt=14847707)]. Dusek et al. (2003) found that the average concentration of free, water-soluble phosphate (added largely as pyrophosphate and other polyphosphates, and expressed as P2O5) in meats (the authors also provided a large table showing the amounts in specific meats) was 16 mg/g protein. So 50 grams of meat protein could provide 800 mg of free phosphate. Dusek et al. (2003) provided a figure of 10.6 mg P per gram of protein as the figure that should be used to calculate the amount of protein-bound phosphate in meat (p. 765). I'm reasonably certain that Dusek et al. (2003) are not referring to phosphorus pentoxide [a "corrosive" powder that doesn't look like it could be hydrolyzed enzymatically by mammals or any other life forms, but I could be wrong about that: (http://www.wuzhouchem.com/cataloged/inor/phosphorus_pentoxide.htm)]. The description of the phosphate as P2O5 is a way of describing the stoichiometry and does not refer to some nonexistent, free compound with P2O5 as its structural formula [see here: (http://books.google.com/books?id=71g3AAAAMAAJ&pg=PA55&lpg=PA55&dq=P2O5+meat&source=bl&ots=m4FtWBiE-y&sig=XIszbFeyN-lLs7yb04bh_ytRNBs&hl=en&ei=ymZoSveMJIH4NIjS-PUM&sa=X&oi=book_result&ct=result&resnum=18)]. Regardless of the precise amounts of phosphate, the point is that a diet high in meat protein could conceivably provide a gram or more of phosphates. And one can't really estimate his or her phosphate intake by looking at the dietary "phosphorus" intake. It's astonishing to me that one so seldom sees descriptions of the actual phosphate contents or phytate contents of foods. The provision of the "phosphorus content" really doesn't tell one much.
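Here's the arithmetic for the figures quoted above, written out (the 50-gram protein serving is the same example used in the text):

# Phosphorus-to-phosphate conversion and the meat estimates quoted above.
P_FRACTION_OF_PO4 = 30.97 / 94.97   # phosphate (PO4) is ~32.6 percent phosphorus by mass

def phosphorus_to_phosphate_mg(p_mg):
    """Convert mg of phosphorus to the equivalent mg of phosphate (PO4)."""
    return p_mg / P_FRACTION_OF_PO4

# Massey (2003): ~20 mg phosphorus per gram of meat protein.
print(phosphorus_to_phosphate_mg(20))   # ~61 mg PO4 per gram of protein

# Dusek et al. (2003): ~16 mg of free, water-soluble phosphate per gram of protein,
# so a 50-gram-protein serving of meat:
protein_g = 50
print(16 * protein_g)                   # ~800 mg of free phosphate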

That type of information may be important, given that marginal hypophosphatemia can produce really serious problems. I was reading about that, but I'm not up for citing a lot of articles. There's research showing that phosphate depletion basically produces effects that mimic those of a mitochondrial disorder. PO4 depletion tends to cause ATP depletion and can cause cardiomyopathy, encephalopathy, liver disease, myopathy, deficient immune function, and the more commonly-seen manifestation of hemolysis. But the cardiomyopathy, myopathy, and encephalopathy are the problems, in postmitotic cell types, one sees in mitochondrial disorders. It's also the case that an increase in glycolytic activity tends to increase phosphate uptake into cells and that phosphate activates glycolysis, etc. Another interesting thing is the fact that both glutamine depletion and phosphate depletion, for example, can occur following intense exercise or surgical trauma, etc. But glutamine and some other free amino acids can produce a hypophosphatemic effect, both by increasing phosphate uptake into cells (i.e. utilization for growth, etc., as in the "refeeding syndrome") and, probably, by increasing the urinary excretion of PO4. An excessive PO4 intake can also cause metabolic alkalosis, and PO4 depletion can cause metabolic acidosis. But glutamine, for example, has been shown to cause alkalosis or, rather, an acute increase in serum bicarbonate and also a hypophosphatemic effect and hypocalcemic effect (this effect is more pronounced with alpha-ketoglutarate and other ketoacids). Ketoacids, for example, have been used to reduce parathyroid hormone levels, but they also can reduce serum phosphate (an effect that is not produced by the reduction in parathyroid hormone (PTH) levels, given that an acute increase in PTH increases urinary phosphate excretion) and calcium levels. So it's kind of a catch-22, given that glutamine supplementation may, in my opinion, in vulnerable individuals, tend to drive PO4 into cells, decrease serum PO4, and, in theory, exacerbate the alkalosis that can result from PO4 supplementation. But PO4 supplementation may (or may not) increase serum PO4 and may, like glutamine (more so with some of those ketoacids), potentially, decrease serum calcium and produce metabolic alkalosis.

In any case, vitamin D supplementation seems, in my opinion, to be a safer way to deal with resistance exercise-induced phosphate depletion than phosphate supplementation is. ATP disodium, more than nucleotide monophosphates, could start to provide significant amounts of phosphate at high doses (ATP disodium is 48.4 percent "phosphate" by mass, or something close to that). It would mainly become something to consider at doses higher than those that anyone would use, but, at lower doses, that (the contribution of ATP-derived phosphate) might have some useful effects in the context of resistance exercise, etc. I think sodium phosphate supplementation sounds very problematic, in general, and I've discussed that in past postings.

I actually think that researchers (or whoever else) might consider administering oral nucleotide triphosphates instead of monophosphates, or might consider the issues that relate to phosphate depletion or sequestration (such as by uridine, derived from triacetyluridine (TAU), or the ribose-induced activation of glycolysis), even temporarily or intermittently, given that the phosphorylation of nucleosides can increase phosphate utilization or sequester phosphate. The turnover of nucleotides is very dynamic, and the issue is not likely to be phosphate sequestration as much as the potential for the gradual depletion of phosphate or increase in phosphate utilization or turnover. Ribose is known to enhance the activities of glycolytic enzymes, by its entry into the nonoxidative pentose cycle and conversion into glycolytic intermediates, and phosphate availability has been shown to be limiting for PRPP formation under some circumstances. The ribose-induced increases in uric acid are not nearly as great as those produced by the fructose-induced phosphate sequestration and ATP depletion, and ribose does not appear to deplete ATP or purine nucleotide pools, in general, very much. But, over time, there could be a ribose-induced increase in phosphate turnover or utilization. I came across research showing that uridine can decrease inorganic phosphate levels, acutely, in ex vivo tissue preparations, I think, by its incorporation into uridine nucleotides (monophosphates, diphosphates, and triphosphates). Those effects occurred at high concentrations, but it's conceivable to me that the usefulness of TAU in mitochondrial disorders could, over time, be compromised by the gradual depletion of phosphate, either from the uridine-induced increases in glycolytic activity or the incorporation of uridine and uridine-derived pyrimidines into nucleotide pools. I think it would mainly occur in vulnerable individuals, in disease states, and the same would be true for any supposed effects of ribose. Adenosine has also been shown to increase glycolytic activity, such as by the AMP-induced activation of phosphofructokinase, I think, if I remember correctly, or by a "physiological" activation of AMPK and the AMPK-activation-induced transcription or phosphorylation of glycolytic enzymes. It could become really important, because glycolytic activity is already going to be accelerated in many of the disease states in which nucleotides would be expected to be therapeutic. Glutamine, for example, has been shown to preserve the purine nucleotide pools in the heart during ischemia and to reduce glycolytic glucose utilization, by serving as an alternate energy substrate. And phosphate availability can determine glutaminase activity (the activity of glutaminase, which is "phosphate-activated glutaminase" in many cell types, can become dependent on phosphate availability) and hence the capacity of glutamine to actually be deaminated and serve as an energy substrate, upon its entry into the TCA cycle as alpha-ketoglutarate or as other intermediates that its carbons are incorporated into. So glutamine could, under some circumstances, reduce phosphate turnover, but the potential for a hypophosphatemic effect, in some individuals, could produce unpredictable and tissue-specific decreases or changes in phosphate availability. I think some of these factors could become really important in some situations, as in disease states, given that phosphate depletion can mimic the effects of mitochondrial disorders.
Resistance exercise, in the long term, can increase red blood cell 2,3-bisphosphoglycerate and serum phosphate levels but can decrease those levels in the short term, for example. These are just my opinions, however, and people with kidney disease or other conditions would want to be especially careful to regulate their phosphate intakes. Even the small amounts of phosphate from nucleotides could affect serum phosphate and calcium balance under those circumstances. Another thing to remember is that phosphates can inhibit iron, magnesium, and calcium absorption, and one could consider taking those types of supplements a few hours apart. The main thing with any phosphate source is not to provide a large amount at any one time, in any one dosage. Acute phosphate nephropathy has mainly been reported in the context of 23-gram dosages of sodium phosphate, given twice daily, in preparation for some medical procedures. But there's no downside to being extra cautious with sources of phosphate. The amounts of phosphate provided in commonly-used dosages of nucleotide triphosphates, such as ATP disodium, are similar to or less than those found in small servings of meats or milk, but, again, it wouldn't hurt, in my opinion, to spread the total dosage out across the day, etc.
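
To put that last comparison in rough numbers, here's a minimal sketch; the 400 mg daily dose, the ~17 percent elemental-phosphorus fraction, and the ~95 mg of phosphorus per 100 mL of milk are all illustrative assumptions on my part, not recommendations:

atp_disodium_dose_mg = 400.0    # hypothetical daily dose of the salt (assumption)
p_mass_fraction = 0.17          # elemental-P fraction of anhydrous ATP disodium (from the sketch above)
milk_p_mg_per_100_ml = 95.0     # approximate phosphorus content of milk (assumption)

p_from_dose_mg = atp_disodium_dose_mg * p_mass_fraction   # ~68 mg of phosphorus
print(f"P from the dose: ~{p_from_dose_mg:.0f} mg")
print(f"P from a small (100 mL) serving of milk: ~{milk_p_mg_per_100_ml:.0f} mg")

Under those assumptions, the whole day's dose supplies less phosphorus than even a very small serving of milk, and splitting it into two or three smaller dosages would make each individual exposure smaller still.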

Monday, July 13, 2009

Note on Posting "Times"

Incidentally, I don't post things on here at 4:40 am or other times like that. For some reason, the "clock" on the blog was set to Pacific time instead of Central, and I keep forgetting to change it (and I don't put a high priority on navigating the horrendous interface of the Blogger software to find out, again, how to change the "clock").

Visible Light and Anterior Chamber-Associated Immune Deviation

This article [Streilein et al., 2000: (http://www.ncbi.nlm.nih.gov/pubmed/11268357)] is fantastic, and the authors discuss anterior chamber-associated immune deviation (ACAID) ("ocular immune privilege") in relation to the release of neuropeptides (CGRP, VIP, alpha-MSH, etc.) by nerve fibers innervating the iris and ciliary body. I'm not entirely clear on the mechanisms, but I think ACAID is essentially similar, in some ways, to the systemic immunosuppression induced by UVB. The neuropeptides reduce the costimulatory capacities of antigen-presenting cells migrating from the eyes and thereby cause antigen-specific tolerance that's mediated by regulatory T-cells, I think. CGRP can also directly inhibit some functions of resident (?) macrophages or of monocytes infiltrating the eyes, etc. A key factor in ACAID is the "lymphatic" drainage of aqueous humor into the circumferential veins and then into the spleen (meaning there's an absence of true lymphatic drainage, and hence of the potentially pro-inflammatory, APC-maturation-inducing signals and mechanisms that operate in lymph nodes, for some of these antigen-presenting cells migrating from the eyes); animals have to have an intact spleen to develop ACAID in some of these experiments. Some interesting classes of regulatory T-cells, such as natural killer T-cells (in this case, they're like "immunosuppressive loose cannons" or something, because I think the functions of natural killer T-cells are only very loosely regulated), appear in the spleen following UVB, but, as far as I know, the spleen is not required for UVB-induced systemic immunosuppression. In any case, on pages 302-303, Streilein et al. (2000) suggest that long periods of darkness (48 hours or longer) cause the effects of melatonin on antigen-presenting cells at different points in the eye to become dominant over the effects of transforming growth factor-beta2 (TGFb2) and thereby decrease or interfere with the maintenance of ACAID. It's interesting that there's thought to be a vitamin D response element (VDRE) in the TGFb2 promoter [as discussed in Lamprecht et al., 2003: (http://www.ncbi.nlm.nih.gov/pubmed/12894248); (http://scholar.google.com/scholar?hl=en&q=%22vitamin+D%22+VDRE+TGF+beta2+promoter)], and TGFb2 and many other aspects of TGFb2 signalling are, in my view, strongly interconnected with vitamin D receptor (VDR) signalling. (It's difficult to search on Greek letters in that search engine, and there's probably some trick to doing it that I'm not aware of.) Streilein et al. (2000) also discuss the fact that visible light in the green wavelengths (500-510 nm) is thought to contribute to the maintenance of ACAID. Here's a section in which Streilein et al. (2000) discuss their thoughts on ACAID in relation to primary afferent neurons (the reason I couldn't include this in the paper, despite its similarity to the research on the CGRP-dependence of UVB-induced systemic immunosuppression, is that explaining these researchers' thoughts on the role of trigeminal fibers, innervating the cornea, in the visible-light-mediated maintenance of ACAID would have required another two pages just to convey what mechanisms could account for this type of effect--and I'm still not sure what the mechanism would be, though I think the UVB-mediated mechanisms seem clearer than those that would explain the actions of visible light):

"The capacity of the normal eye to support ACAID induction is dependent upon integrity of afferent nerves from the cornea, as is the ability of iris and ciliary body to secrete an immunosuppressive ocular microenvironment. We suspect that tonic signals arise from the corneal surface. These signals are propagated through the trigeminal ganglion to the central nervous system, where they promote, via connections to the nervous supply of the iris and ciliary body, the intraocular secretion of immunomodulatory neuropeptides" (Streilein et al., 2000, p. 302).

The pathways are not clearly defined at all, though. I don't see how visible light could induce action potentials in trigeminal ganglion neurons innervating the corneal surfaces, as UVB does. Streilein et al. (2000) noted that researchers had found that transection of the optic nerve hadn't abolished the visible light-mediated maintenance of ACAID. I have little doubt that visible light does contribute to ACAID, and it may be that the effect involves some circuitous pathway, perhaaps involving the preganglionic cholinergic neurons in the Edinger-Westphal nucleus. I don't know very much about the anatomy. Here's a search on visible light and ACAID (http://scholar.google.com/scholar?q=visible+light+%22immune+deviation%22+OR+%22immune+privilege%22&hl=en). That's not a very revealing search, and maybe some researchers have investigated mechanisms, without explicit discussions of the immunological implications, by which visible light can influence VIP and CGRP release. I don't know if there are anatomical pathways by which retinal ganglion cells, activated in response to visible wavelengths, can influence, independently of the optic nerve, different classes of neurons in the suprachiasmatic nucleus or other parts of the brain. These authors don't discuss the possible anatomical pathways very much [Warren et al., 2003: (http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2435209)(http://www.ncbi.nlm.nih.gov/pubmed/12752771)], but maybe visible light can induce neuropeptide release from the peripheral terminals of trigeminal ganglion neurons by some polysynaptic pathway(s) that do not require transmission along optic nerve fibers. Or maybe the "efferent" arm of the visible-light-induced release of VIP and CGRP into the iris and ciliary body requires the indirect actions of ciliary ganglion neurons on trigeminal ganglion neurons innervating those parts of the eye (I mean that a ciliary ganglion neuron innervating the iris and ciliary body might depolarize and induce afferent action potentials in "one" trigeminal ganglion neuron and thereby induce a dorsal root reflex that would induce an efferent action potential in a second TG neuron and thereby release CGRP into the iris, etc.). Maybe that's not possible, but I'm just suggesting that it might not be necessary for the polysynaptic pathway, induced by the supposed optic-nerve-independent actions of visible light, to influence TG neurons by actions on the central terminals of TG neurons in the caudal trigeminal nucleus. I don't know, and the anatomy is really complex and not all that well-understood, it seems. I'm not trying to say that "anything goes" as far as the anatomical pathways are concerned. I'm just discussing the topic openly. Berson et al. (2003) (http://scholar.google.com/scholar?hl=en&q=%22Strange+vision%3A+ganglion+cells+as%22+Berson) discussed the fact that some intrinsically-photosensitive retinal ganglion cells project to and form direct synaptic connections with neurons in the olivary pretectal nucleus and thereby influence the firing rates of preganglionic neurons in the Edinger-Westphal nucleus and, as a result, also the firing rates of ciliary ganglion neurons. Vakalopoulos (2005) [Vakalopoulos, 2005: (http://scholar.google.com/scholar?hl=en&q=Vakalopoulos+%22the+unconscious%3A+A+proposal+for+the%22)] discussed some of the ways in which retinal ganglion cells could influence visual "perceptions," of one kind or another, in blind people.
Even though the effects of visible light on ACAID can, apparently, occur independently of transmission by optic nerve fibers, it seems that there could be both optic-nerve-dependent and optic-nerve-independent pathways by which neurons in the ciliary ganglia could, via synaptic inputs from neurons in the suprachiasmatic nucleus to neurons in the Edinger-Westphal nucleus, participate in ACAID [(http://scholar.google.com/scholar?hl=en&q=%22edinger+westphal%22+preganglionic+suprachiasmatic); (http://scholar.google.com/scholar?hl=en&q=%22ciliary+ganglion%22+%22anterior+chamber%22+iris+%22ciliary+body%22)].

I think I remember seeing someone discussing something about the capacity of retinal ganglion cell activation to mediate circadian effects independently of transmission by optic nerve fibers, but I might well be wrong about that. Warren et al. (2003) discuss the fact that the action potentials induced in retinal ganglion cells, in response to their depolarization by visible wavelengths, seem to suggest the involvement of a photoreceptive protein that's a member of the transient receptor potential family of ion channels that can serve as photoreceptors in invertebrates (http://scholar.google.com/scholar?hl=en&q=TRP+%22transient+receptor+potential%22+%22retinal+ganglion+cells%22). It's interesting that capsaicin induces action potentials in C-fibers by binding to the type 1 vanilloid receptor (VR1), which is a member of the family of TRP channels in humans and other mammals. That's not much of a statement and doesn't mean anything, by itself, but it seems as if there's some crude overlap between some of these primitive mechanisms and the mechanisms that operate in higher mammals.

I was initially going to mention, again, the possibility that small amounts of UVB or UVA had been in the light sources that researchers had used to evaluate visible-light-mediated ACAID maintenance, but it sounds like they used very specific wavelengths within the visible spectrum.

Search

I came across this type of search, by chance, a while back. I don't really know anything about these types of topics, but here's a search.

http://scholar.google.com/scholar?hl=en&q=stroke+cervical+manipulation+vertebral+OR+vertebrobasilar

Sunday, July 12, 2009

Potential Modulation of the Efferent Activities of Visceral Primary Afferent Neurons Following Cutaneous UVB Exposure

I was going to mention that the more general implication of those articles discussing the anatomical convergence or near-convergence of the central terminals of visceral and somatic primary afferent neurons is that, under the "best" (or worst, if one assumes that UVB has no place in the biology of life on Earth) of circumstances (outside of a disease state), UVB could augment efferent activity in primary afferent neurons innervating visceral sites and influence large numbers of physiological processes that way. For example, the bone marrow, arteries, heart, liver, lymph nodes, etc. are innervated by primary afferent neurons. Maybe there would be some site-specificity to the effects (meaning that one dermatomal region could influence CGRP release at some visceral sites but not others), but I don't know that that would necessarily be the case. One implication is that, in all of the research looking at 25-hydroxyvitamin D levels in relation to atherosclerosis, the 25-hydroxyvitamin D levels could be acting as a surrogate marker for UVB exposure itself. The changes in visceral neuropeptide release, such as one might, in theory, expect to see in response to UVB exposure, could be influencing the phenotypes and degranulation "patterns" of perivascular mast cells and influencing atherosclerosis through those mechanisms. It could be that neurogenic inflammation is not suppressed in any generalized way by UVB but that it's "refined" or "recoupled" to specific stimuli, so that there's not this constant, low-level "waste" of ongoing, endless mast cell degranulation or piecemeal degranulation (that may contribute to atherosclerosis). I'm not saying these suggestions necessarily have any basis in fact or reality, at this point, and I'm not trying to say that UVB can influence everything in some fundamental way. I'm just throwing out thoughts that follow from some of the mechanisms discovered by researchers. I'm not a scientist, and I don't have any kind of over-inflated sense of the importance of things I talk about in a blog, etc. Nothing I do is going to be of any importance or significance. It's not that there's anything fundamentally more important about these mechanisms than there is about more thoroughly-established mechanisms. It's just meant to be an exploration of possible consequences of these mechanisms, should they be shown to occur in humans. I'd urge anyone to discuss all of these issues with one's doctor before doing anything, and I'm not suggesting that anyone should do anything at all.

Open Discussion on Multiple Sclerosis, UVB, and Energy Metabolism

I was going to mention that one way of looking at the age-dependence of the protective effect of "migration" in multiple sclerosis (migration to an equatorial latitude) is that the axonal transport capacity or generalized "robustness" is greater in people who are relatively "younger" than in people who are older. There's also a latitude gradient in Epstein-Barr Virus-associated malignancies (Burkitt's lymphoma, etc.), but it's not clear if that's explainable in terms of the lesser extents to which people living in sub-Saharan Africa have access to health care and nutrition, etc. (http://scholar.google.com/scholar?hl=en&q=Burkitt%27s+lymphoma+latitude). That's not a great search. But the point is that that could provide indirect support for the ideas people have had about late EBV infection being associated with abnormal neural development in the offspring of people who have had late EBV infections (see past postings and the articles cited in them). I just don't think one can ignore that epidemiology (the latitude gradient and research on migration) in relation to multiple sclerosis, and I don't think it's written in stone that there's no capacity for protection after age 15 or something (no capacity for protection to occur in response to migration or to the environmental, protective factor(s) that have been associated with migration to equatorial latitudes). I've seen research suggesting, for example, that migration may be protective through age 27 or something. That would tend to imply that migration *could* be protective at later ages but that people's behaviors (related to time spent outdoors, to nutritional factors that would interact with that, and to activity levels and everything else) have, perhaps, become ingrained by the time people are 16 or 17. On average, how many people radically change everything about their habits after age 18? I'm talking about drastic changes in UVB exposure, etc. I mean, I'm sorry to say it (and I'm in no way suggesting that people go out in the sun without talking with their doctor), but 10 minutes of sun exposure at noon, on the hands and face, is not likely to increase serum 25-hydroxyvitamin D levels all that much and is not going to have the kinds of immunomodulatory effects that depend on hundreds of billions of neutrophils infiltrating the UVB-irradiated skin and on the immunosuppressive cytokine milieu associated with that infiltration. In my past comments, the main thing I wanted to convey is that I don't think people should expect magic from UVB exposure and that there are many other factors that come into play in the etiology of multiple sclerosis. But, for example, Epstein-Barr Virus infects keratinocytes (http://scholar.google.com/scholar?hl=en&q=Epstein-Barr+keratinocytes), and I don't have to say what that means. It means that there's the *potential* for the induction of tolerance to EBV latent and lytic cycle proteins on a kind of mass scale, following UVB exposure. But there's also the potential for that tolerance to turn into seriously aberrant, Th2-driven immunity and to worsen matters. It's difficult to control or predict the responses, and that's especially true in disease states (in which there's potential for disastrous effects). But, then again, UVB is known to suppress both Th2 and Th1 immunity (the Th1/Th2 dichotomy is a bit outdated but still has some usefulness as a crude framework for looking at these things).
But my point is that the release of Th1 cytokines almost disappears, in some cases, from the lymph nodes of UVB-irradiated animals, and the idea that everything boils down to vitamin D and hands-and-face, anemic, Victorian-Era, parasol-carrying approaches is a bit absurd to me. That said, I can't make any recommendations on these things, because I can't give medical advice and, to say the least, can't make any guarantees whatsoever about safety. Avis et al. (1995) (http://scholar.google.com/scholar?hl=en&q=sudden+death+sun+exposure+%22multiple+sclerosis%22) discussed case(s) of people with multiple sclerosis dying after sitting in the sun.

I think it's telling that thermoregulatory dysfunction features prominently in multiple sclerosis, but I don't claim to know how it relates to the supposed UVB-mediated trigeminohypothalamic thermoregulation that may occur in humans. Part of the difficulty is that, for example, the neuropathological effects of Epstein-Barr Virus, in some extreme case studies, can be highly diffuse, and that could be explained in any number of ways (in terms of "diffuse" B-cell infiltration or latent infection of astrocytes and microglia and other perivascular, monocyte-macrophage-lineage cells). I tend to think it's a result of astrocytic infection by EBV, but that's just my opinion. The relapsing-remitting quality could be explained in terms of the devastating effects that pro-inflammatory cytokines can have on energy metabolism, and that could explain the apparent absence of overt inflammation in some research in multiple sclerosis (I can't say anything more specific without looking at the specific articles that people have cited, and I don't want to do that now).

I just don't understand why there would be such resistance to the consideration of all the mechanisms at work. I've seen articles stating that there's no problem with energy metabolism in the brains of people who have multiple sclerosis. That makes no sense, in my opinion, because axonal degeneration implies profound problems with energy metabolism. And if one buys into the idea that a lot of pro-inflammatory cytokines are being released from activated T-cells infiltrating the CNS, then one would expect major problems with energy metabolism from that. Here's a not-very-good search that shows some of the vast amount of research on the mitochondrial dysfunction that can be rapidly induced by TNF-alpha and other pro-inflammatory cytokines (http://scholar.google.com/scholar?hl=en&q=TNF+mitochondrial+dysfunction+astrocytes). There's one article in that search in which the authors probably discuss the concept, which I agree with, that optic neuropathies are frequently associated with mitochondrial dysfunction and can be caused by it. The energetic demands of neurotransmission in the optic nerve fibers are enormous. I'm not saying that reducing pro-inflammatory cytokine production by T-cells or other cell types doesn't have the potential to improve energy metabolism. I just think that it would help to acknowledge the deficits in energy metabolism that are very likely to exist in a neurodegenerative disease, such as multiple sclerosis, and to try to develop therapeutic strategies for addressing those deficits in more direct ways. These are just my crude, unrefined thoughts on some of these topics, and I'd strongly urge anyone to discuss things with one's doctor before doing anything.

I'm deliberately discussing some of these things from an idealistic, somewhat impractical point of view, because dogged pragmatism and dogma haven't seemed to be all that beneficial. Even something like parenteral guanosine could be viewed as an energy-metabolism-based strategy, because de novo purine biosynthesis is metabolically costly (and the brain has very little capacity for de novo purine biosynthesis). Also, anticonvulsant medications and adenosine receptor activation or modulation are known to be able to increase or preserve the phosphocreatine to creatine ratio and the adenylate charge. Additionally, purine nucleotide availability is likely, in my opinion, to be a limiting or nearly-limiting factor in mtDNA replication and in other aspects of mitochondrial functioning. I'm not saying guanosine would be a cure-all, but I'm just saying that there are many ways to address energy metabolism.

Elevating uric acid (UA) levels intracellularly, in neurons and astrocytes, such as through the administration of exogenous, parenteral purine nucleotides, would, in my opinion, have the potential to be therapeutic as a result of, among other mechanisms, the UA-mediated improvements in mitochondrial functioning (as a result of peroxynitrite scavenging). That article I cited a while back, on UA in relation to the sympathetic nervous system and goal-oriented behavior, found intramitochondrial UA levels of 60 uM or something. That's very significant, in my opinion. The peroxynitrite-scavenging effects of UA may not look all that special or unique in some articles, but one has to consider the fact that the UA-mediated suppression of nitric oxide (NO) output from activated macrophages can occur in the face of massive increases in the output of iNOS-derived NO. I forget which variable--NO output or NADPH oxidase activity--increases the most dramatically. I think the mRNA or protein content of iNOS can increase 20- to 50-fold or something, in activated macrophages, and I think the NO output can increase by something like 1000-fold or even more. Some articles show these little graphs of the effects of UA (suppression of NO output by cultured monocyte-macrophage-lineage cells, etc.), and the graphs don't capture what's going on. UA can suppress NO output drastically at physiological concentrations, and that's no small feat. NO and peroxynitrite produce strongly detrimental effects on mitochondrial function [a lot of these articles show, rather incidentally or in a manner that doesn't showcase the effects of UA, that UA can ameliorate peroxynitrite-induced mitochondrial dysfunction in various cell types: (http://scholar.google.com/scholar?hl=en&q=mitochondrial+peroxynitrite+urate+OR+uric)].

Saturday, July 11, 2009

Note on Earlier Posting

I meant to say that this article by Niederkorn and Wang (2005), in this search result (http://scholar.google.com/scholar?hl=en&q=CGRP+placenta+immune), compares ocular immune privilege to maternal-fetal immune privilege in the context of CGRP release (both sets of processes and mechanisms are CGRP-dependent, to some extent). I evidently didn't edit that posting very carefully, but hopefully it was still relatively clearly-written.

Potential Capacity of Cutaneous UVB to Influence Placental Blood Flow and Maternal-Fetal Immune Privilege via Primary Afferent Neurons

The author of this article [Berkley, 2005: (http://www.ncbi.nlm.nih.gov/pubmed/16139851)] wrote this in the abstract (I'm going to look at the full text now):

"Throughout the CNS, neurons responsive to stimulation of the reproductive tract also respond to stimulation of skin and other internal organs, in a manner modifiable by reproductive status and peripheral pathophysiology."

I'm wondering if cutaneous UVB exposure (the exposure might very well not have to be on sites innervated by C-fibers whose central terminals are in the L1 and T13 dorsal horn) might affect placental blood flow (by producing primary afferent depolarization and inducing efferent action potentials in visceral primary afferent neurons) via changes in the activities of neurons, originating in the spinal cord at sites rostral to the L1 and T13 dorsal horn or at supraspinal sites, that exert a descending influence on the central terminals of visceral primary afferent neurons (T13 and L1 dorsal root ganglion neurons). I'm not, in any way, suggesting that anyone go in the sun without talking to one's doctor, and I'm just discussing my more extreme thoughts on these topics. This may well not be true, but I'm just saying it seems like it might be possible. Berkley (2005) discusses some of the evidence that visceral and somatic (such as cutaneous) primary afferent neurons converge on the same neurons in the dorsal horn [or maybe converge, via intervening interneurons or other neurons, on tertiary sites (i.e. neurons) in the dorsal horn (http://scholar.google.com/scholar?hl=en&q=tertiary+convergent+neurons+dorsal+horn)].

This article [Amira et al., 1995: (http://ep.physoc.org/content/80/4/645.full.pdf)(http://www.ncbi.nlm.nih.gov/pubmed/7576603)] is not all that good an example of anything, and I was looking at it before I found that article by Berkley (2005). But the article by Amira et al. (1995) discusses some of the growth of and changes in the sensory innervation of uterine structures during pregnancy in rats.

Researchers have suggested that CGRP may be involved in the regulation of placental blood flow (http://scholar.google.com/scholar?hl=en&q=CGRP+placental+blood+flow), and an increase in the release of CGRP from DRG neurons innervating the placenta might increase or otherwise regulate (potentially producing adverse effects, although I'm not attempting to make any values-based or subjective statements in this posting) placental blood flow [Mowa and Papka (2004), listed in this search result, appear to have discussed the capacities of visceral primary afferent neurons to conduct efferent action potentials and exert "efferent effects" on their visceral target tissues, as primary afferent neurons are known to do (discussed in past postings of mine): (http://scholar.google.com/scholar?hl=en&q=efferent+primary+afferent+CGRP+placenta)]. Dong et al. (2003) [Dong et al., 2003: (http://molehr.oxfordjournals.org/cgi/content/full/9/8/481)(http://www.ncbi.nlm.nih.gov/pubmed/12837925?dopt=Abstract)] discussed the fact that CGRP had been found to reduce placental vascular resistance (i.e. produce vasodilation) in in vitro experiments and that CGRP is a very potent vasodilator. They also note that plasma CGRP levels increase during pregnancy. UVB-mediated increases in CGRP release in the placenta could regulate the IgG subclasses (IgG1, IgG2, and IgG3 isotypes, etc.) that cross the placenta [with reference to the issue of the fetus as constituting, for all practical purposes, an "allograft," as discussed by Niederkorn and Wang (2005) (discussed below)] [Niederkorn and Wang, 2005: (http://www.ncbi.nlm.nih.gov/pubmed/16314777)] or even regulate Th1/Th2 immunological development in the fetus. I'm just suggesting these possibilities and am not suggesting anything beyond that. I link to a search (http://scholar.google.com/scholar?hl=en&q=CGRP+placenta+immune) on the potential role of CGRP in fetal immune privilege (or placenta-associated immune privilege or whatever one wants to call it, given the variety of terms used to describe it) below [or, actually, these searches are better: (http://scholar.google.com/scholar?hl=en&q=fetal+%22immune+privilege%22); (http://scholar.google.com/scholar?hl=en&q=fetal+%22immune+privilege%22+CGRP)]. On the other hand, there's a reference in that search (Parida et al., 1996) suggesting that elevations in CGRP might produce adverse effects. But Parida et al. (1996) (http://journals.lww.com/pedresearch/pages/articleviewer.aspx?year=1996&issue=04001&article=00399&type=fulltext) looked at cord blood concentrations of CGRP, and elevations in blood CGRP are not likely to be produced by the same factors or conditions that may produce localized increases in CGRP release in the placenta (e.g. UVB). For example, Parida et al. (1996) note that elevations in CGRP can occur during sepsis. That's obviously a very different set of circumstances than anything I've been talking about, and one can't conclude much, in my opinion, about localized changes in peptidergic transmission by looking at changes in blood levels of CGRP or other neuropeptides. But there is the potential for some sort of deficiency in the normal adaptations that might be expected to occur (the adaptive changes in axonal transport or refinement of sensory transmission associated with the occurrence of "more" dorsal root reflexes) in response to the supposed UVB-mediated increases in spontaneous, efferent activities of primary afferent neurons innervating the placenta.
In any case, CGRP could conceivably fulfill a "similar" role to that of estriol, a hormone that increases during pregnancy. Estriol has been researched in the treatment of multiple sclerosis (http://scholar.google.com/scholar?hl=en&q=estriol+%22multiple+sclerosis%22+pregnancy), because autoimmune or inflammatory symptoms can sometimes become less severe during pregnancy (potentially in an estriol-dependent manner).

The article by Niederkorn and Wang (2005), in this search result (http://scholar.google.com/scholar?hl=en&q=CGRP+placenta+immune), makes comparisons between ocular (eye-associated) immune privilege [anterior chamber-associated immune deviation (ACAID)] and maternal-fetal immune privilege. I'm assuming they discuss CGRP to a significant extent, because ACAID is known to be dependent on CGRP, as discussed in past postings. In any case, these mechanisms might be relevant to the etiologies of disease states that have been associated with low levels of UVB exposure (multiple sclerosis and schizophrenia) [(http://scholar.google.com/scholar?hl=en&q=UVB+schizophrenia); (http://scholar.google.com/scholar?hl=en&q=UVB+%22multiple+sclerosis%22); (http://scholar.google.com/scholar?hl=en&q=CGRP+%22multiple+sclerosis%22)]. As discussed in past postings, vitamin D analogs have been shown to upregulate CGRP mRNA in dorsal root ganglion neurons, in an NGF-dependent manner [Riaz et al., 1999: (http://scholar.google.com/scholar?hl=en&q=CGRP+%22vitamin+D%22+dorsal+root)]. Vitamin D deficiency has also been linked to the development of peripheral neuropathy, in some cases [it looks like Carlson and Kenny, 2007, may have discussed that possibility: (http://scholar.google.com/scholar?cites=4360282027492426919&hl=en)].

That's not a good thing to see, in my view. I didn't realize there was that research on maternal-fetal immune privilege. Hopefully these suggestions aren't valid, but, when I decided to explore that potential mechanism that flowed out of the UVB research by Gillardon and colleagues and by other researchers, I didn't know there was all of that research (on the actions of CGRP, outside of the context of UVB, at the maternal-fetal interface). I guess there's maybe not a massive amount of research, but yeah...

Friday, July 10, 2009

Regulation of mTOR Signalling by Branched-Chain Amino Acids: Relevance to Research on HMB and Energy Metabolism and Neurodegenerative Diseases

This article [Nobukuni et al., 2005: (http://www.pnas.org/content/102/40/14238.full)(http://www.ncbi.nlm.nih.gov/pubmed/16176982?dopt=Abstract)] discusses some mechanisms whereby branched-chain amino acids (BCAAs) (or, by extension, metabolites of BCAAs, such as 3-hydroxyisovalerate/HMB) can activate mTOR-dependent signalling. It sounds like they're saying that the repletion of cells with free amino acids augments mTOR-dependent signalling through a rapamycin-resistant pathway that's similar to, or overlaps with, an insulin-stimulated, rapamycin-resistant signalling pathway. The only other thing I gleaned from a quick glance at the article is that changes in aminoacyl-tRNA levels or in the aminoacylation (charging) of tRNAs, such as histidyl-tRNA, do not appear to mediate the BCAA-dependent increases in mTOR activity. Rather, the authors argued that the BCAA-dependent changes in mTOR activity had been dependent on the levels of intracellular, free amino acids.

This article is interesting and has been cited 172 times, but I guess I'd like to know how any one of those changes in protein-protein interactions might interact with energy metabolism, overall, in the cell. I know that's a tall order, but the importance of all of those protein-protein interactions might be rapidly nullified by changes in the availabilities of energy substrates, etc. In my opinion, one problem with research in molecular biology, and even with research in cultured cells in general, as opposed to research that gives some attention to metabolism, is the implicit assumption that cells, in vivo, will be supplied with the kinds of perfectly-abundant concentrations of energy substrates that cells in culture are provided with. I think this is really important and bears on research in stem cell biology, for example. There's a lot of potential in that area, when one looks into the future, but cells in the brain don't provide any indication, such as might be visible on an MRI or that might cause symptoms in a person, that they are dying in large numbers. It's only after the cells have been dying for 20 or 30 years, and have become dramatically decreased in numbers, that a person might, for example, develop obvious symptoms of Parkinsonism. That kind of gradual cell loss occurs to some degree during normal aging, but the assumption, in my opinion, should be that cells are unlikely to be able to meet their energy requirements in vivo. Varying the supplies of different substrates in a cell culture medium would require a lot of effort, and the extra effort probably couldn't be justified if researchers were already investigating mTOR signalling or other intracellular signalling pathways. But I think the development of some of these prodrug delivery methods, for delivering physiological compounds, should be given a higher priority than people are giving it. In the face of hypoxia or glucose deprivation, giving attention to protein-protein interactions can be like shuffling cards, or dealing all the players a new hand of cards, while all the players are suffocating as the oxygen is being sucked out of the room.