Britain had the beginnings of its national grid system in 1937, when a group of engineers connected a series of smaller regional electricity grids in an effort to increase security of supply and reduce overall electricity costs. This was to form the basis of the National Grid, which we have relied on ever since to provide us with electricity as and when we need it.
However, this National Grid was created when energy was relatively inexpensive to generate, so reliability was ensured by building excess capacity.
The limitations of the current national grid
As we have mentioned previously, the current grid has its limitations. First of all, it is an ageing infrastructure, creaking and straining under the weight of the country's current electrical needs. In the sections below we examine some of the issues that the national grid currently faces:
Electricity supply and demand
Previously, as demand increased, so did capacity – simply, a new power plant was installed. Over time though, the cost of installing new capacity has risen dramatically, as has the cost of the fuel used to power it. Nowadays, more and more of our daily activities rely on electricity. This has led, in spite of improved energy efficiency in many appliances, to a sharp rise in the amount of electricity we consume, pushing up our peak demand to unprecedented levels.
This has put the current electrical grid in an interesting position. Energy demand has increased over time; however new capacity has not been installed at the same rate, so the amount of headroom (the difference between peak supply and peak demand) has been dramatically reduced. This has resulted in the need to fire up older, highly inefficient power stations just to meet current demand. Unless new plans are put into place, things will only get even stickier in the years to come.
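To make "headroom" concrete, here is a minimal sketch of how the margin is derived. The supply and demand figures are invented purely for illustration:

```python
# Illustrative headroom calculation -- the figures below are hypothetical,
# chosen only to show how the margin between peak supply and demand works.
peak_supply_gw = 60.0   # total generating capacity available at peak (GW)
peak_demand_gw = 55.0   # highest simultaneous demand (GW)

headroom_gw = peak_supply_gw - peak_demand_gw
headroom_pct = 100 * headroom_gw / peak_demand_gw

print(f"Headroom: {headroom_gw:.1f} GW ({headroom_pct:.1f}% of peak demand)")
```

As demand creeps up without matching new capacity, that percentage shrinks, which is exactly the squeeze described above.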
Getting electricity from where it is generated to where it is needed is, in principle, fairly simple. However, as the demand for electricity has increased, the Grid has been forced to handle huge amounts of electricity transmitted over great distances from source to consumer. This is a highly inefficient process, with large amounts of electricity lost through lengthy supply lines and rudimentary transmission intelligence.
Increase in renewables
The UK used to rely on a centralised core of fossil fuel and nuclear power plants running up and down the spine of the country to provide itself with power. However, as these power plants have aged, many have closed down, and a new EU carbon reduction directive has meant that many more are due to close in the near future.
The Grid has had to replace this lost capacity by installing new power plants. Since the turn of the century, this new capacity has largely been in the form of combined-cycle gas turbines and renewables.
The major issue with gas is that we need to import it, and although it is cleaner than coal, it still produces harmful emissions when burnt. The major issue with renewable energy is that it is intermittent: if the wind isn't blowing, no power is produced from wind turbines. This makes integrating renewables into an ageing and inflexible grid much more difficult, since energy storage will have to be brought into play. This further complicates the energy picture in the UK.
Reliance on imported fuels
As previously mentioned, many of the UK's ageing fossil-fuelled power plants are shutting down; however, new combined-cycle gas turbine plants are still popping up. One of the fundamental issues we face is energy security. As things stand, the UK is incredibly reliant on gas, especially when it comes to heating homes. This was partly the result of North Sea gas, which we assumed would never run out! Unfortunately, it is running out, so we now import the majority of our gas from Qatar and Norway.
So ultimately, the ability to heat our homes does not sit with the UK energy companies; instead we rely on suppliers such as Qatar in the Middle East – one of the most politically volatile regions on earth. We have already seen massive price fluctuations, and it's pretty worrying to be so completely at the mercy of these markets.
Centralised energy production
The centralised model on which Britain's National Grid was built is fast becoming outdated. Previously, hundreds of fossil fuel power plants stretching up and down the centre of the country supplied the nation with electricity. However, with the rise of solar panels and wind turbines comes a massive increase in micro-generation. This is decentralising a grid designed to run via centralised means.
Increased cost of production
Not only is demand increasing, but generation is also becoming progressively more expensive to expand. You may have read about the recent nuclear power plant go-ahead and wondered why energy production has become quite such an expensive business. The simple fact is that, while the current grid system remains, generation prices will continue to rise and these increases will be passed on to consumers. Nuclear power is pricey, importing fossil fuels is expensive and renewable energy cannot yet be relied upon.
So what can we do?
Obviously, peak supply falling below peak demand would cause serious issues; the concept of rolling blackouts has fortunately not been something most of us have come across in our lifetime. However, these are a real possibility in the years to come unless we act now – so what exactly can we do?
We could use less electricity – energy conservation
We could install more electricity generating capacity – energy generation
We could be wiser in our electricity usage, try to dampen peak demand and produce electricity closer to where it is needed – Smart Grid
Obviously the best thing to do here is to use less electricity and energy, by generally being more energy efficient. This means that, without placing constraints on what you can do, you use less energy in everyday tasks.
Increasing the electricity generating capacity is probably the most expensive of the options: the US Government has calculated that rolling out new capacity costs about three times as much as reducing demand through energy efficiency.
The final option is to be wiser when using energy – and this doesn't necessarily mean energy efficiency. Instead, it means finding ways to remove the peaks in our energy demand so that a lower installed generating capacity can meet our energy requirements.
How can we use energy more effectively on a national scale? By creating a ‘smart grid’.
A smart grid relies on data – not only about how much electricity is being produced, but also about demand at a very granular level.
In order to improve stability and restore order to the national grid system, the smart grid is developing out of a number of technological improvements and controls, which should be fully functioning by 2025.
This smart grid simply means the introduction of high-tech computers and 2-way communications into the national grid system, in the hope that it will help the UK respond to the energy demands of the 21st century while also working towards the planned reduction in carbon emissions.
Historically, the UK has been powered by fossil fuels such as coal, gas and oil. More recently, however, we have seen the introduction of renewable energy into the energy mix. This has put increased pressure on the national grid: instead of having a few large power plants, essentially whenever someone has solar PV installed on their home, this extra generating capacity has to be integrated into the grid. We suddenly have millions of mini power plants, and unfortunately the national grid was not designed to work in this way.
The introduction of ‘Smart’
The smart grid, and the smart meter, will allow the consumer to manage electrical energy usage more than ever before, creating a more efficient, more reliable and lower carbon energy industry for all.
Smart grids will focus on:
Renewable energy distribution
Demand-side management and time shifting the demand
Real-time monitoring of grid performance
These three points will allow for:
Reduced peaks in power usage, achieved by allowing utility companies to control our smart appliances
Reduced electrical wastage by delivering instant information on consumption
An increase in the number of 'smart appliances' to reduce electrical usage
Early warnings for blackouts and rapid recovery systems
One of the main improvements that the smart grid will have over the current national grid system is the introduction of a 2-way digital dialog system. This introduces intelligence, automation and control into the electrical grid. It enables not only the transmission of electricity from grid to home, but also the integration of an intelligent communications network.
So unlike today’s National Grid, which just allows electricity to be delivered into the home, the introduction of this 2-way dialog as part of the smart grid enables real-time energy usage data to be sent from the home back to the energy suppliers. This allows instant meter readings so actual usage can be billed without the need to send someone out to read the meter.
This 2-way dialog increases the potential for consumers to interact more with their energy consumption, helping to reduce peak demand and therefore lowering carbon emissions within the grid.
Real time control
Through the digital communications technology that the smart grid uses, real-time control can be implemented. Both utility companies and consumers are able to see and monitor electrical usage as it occurs. This real-time control increases the reliability, efficiency and speed of the grid. It also allows for time-of-use tariffs, which aim to time-shift demand, resulting in more evenly distributed electrical usage, so expensive new generating capacity doesn't need to be built since peak demand will actually fall.
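The mechanics of a time-of-use tariff are easy to sketch. The rates and the day's usage profile below are hypothetical, chosen only to show how shifting consumption off-peak changes the bill:

```python
# Hypothetical tariff rates -- real rates would be set by the supplier.
FLAT_RATE = 0.15                               # £/kWh, single rate all day
TOU_RATES = {"off_peak": 0.08, "peak": 0.25}   # £/kWh by period

# One day's usage as (kWh, period) pairs -- invented for illustration.
usage = [(3.0, "off_peak"), (2.0, "peak")]

flat_bill = sum(kwh for kwh, _ in usage) * FLAT_RATE
tou_bill = sum(kwh * TOU_RATES[period] for kwh, period in usage)

print(f"Flat tariff:  £{flat_bill:.2f}")
print(f"Time-of-use:  £{tou_bill:.2f}")
```

With most usage already off-peak the two bills come out almost level; shift more load into the peak window and the time-of-use bill overtakes the flat one, which is exactly the price signal the tariff is designed to send.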
Ability to identify issues within the Grid
Currently, once an area loses power, the Grid is only informed once a customer complains about a lack of electricity. This can therefore lead to a long process of repairs, especially if the problem occurs in the middle of the night and no one contacts the utility company until the morning.
With the introduction of the smart grid, when an area is affected by unforeseen circumstances that cause it to lose power, electricity will automatically be redirected via an alternative route, ensuring there is no impact on the customer. This self-healing power of the smart grid enables rapid fixes and allows for a more reliable, efficient electrical grid.
Better integration of renewables in our energy mix
Since 2006 the amount of electricity produced by renewable energy sources has more than doubled. This includes the massive wind farms that are now dotted around the country as well as individual properties that have solar PV panels on their roofs. This has led to a far more complicated energy mix than we have had to deal with before, with intermittency now having a far larger impact on our energy supply than ever before.
The smart grid allows the energy companies to marry real-time production with real-time demand: if an issue is spotted or the headroom becomes too tight, a gas power plant can be fired up to help bridge the gap. Hopefully, in the future, the ability to store energy when demand is low will be available, and this can be used to help balance supply and demand.
Using electricity closer to home
One important feature the smart grid will introduce is the intelligent allocation of electricity. This means that if you were to flick on the kitchen lights, the electricity required would come from the closest possible source. The current grid has many transmission problems that can lead to the loss of electricity, due to the distances it has to travel from the few centralised fossil fuel power plants to the home. However, with the smart grid, and the introduction and integration of millions of micro power plants such as wind turbines and solar panels, electricity can be used much closer to home, thus reducing transmission losses.
Imagine the whole country coming back from work at 6pm, switching on the kettle, plugging in the electric vehicle to charge and turning on the TV.
The smart grid would use distributed intelligence, which allows it to recognise these daily trends. This would enable the grid to implement minuscule delays in the delivery of electricity, relieving pressure and smoothing out the sudden increase in demand.
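A toy sketch shows how staggering start times flattens a spike. The appliance loads and the 6pm surge are invented for illustration, not drawn from any real scheme:

```python
# Hypothetical 6pm loads (kW) that would otherwise all switch on at once.
appliances = [("kettle", 3.0), ("EV charger", 7.0), ("oven", 2.5)]

# Without smart control, everything starts at 18:00 -> one big spike.
naive_peak_kw = sum(load for _, load in appliances)

# With distributed intelligence, starts are offset by two minutes each,
# so the large loads do not all hit the grid in the same instant.
schedule = {name: f"18:{i * 2:02d}" for i, (name, _) in enumerate(appliances)}

print(f"Unstaggered peak: {naive_peak_kw} kW")
print("Staggered start times:", schedule)
```

Multiplied across millions of homes, even delays of a minute or two spread the surge out enough to avoid firing up an extra peaking plant.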
Data is gathered where electricity is consumed, at the edge of the smart grid, and can be analysed in order to make real-time decisions on the distribution of electricity. This distributed intelligence will reduce the pressure that the centralised grid is put under, while also multiplying the other benefits of the smart grid.
Operations centres and resilience
Getting electricity from the grid to the place where it's needed used to be fairly simple. The number of appliances in the UK was small enough that a modest network of cables could easily transport the electricity. However, as the need for electricity increased, more cables were put up until the network became a complex web of criss-crossing wires stretching for miles on end.
This often causes damaging oscillations within the network, which leads to blackouts and a potentially inconsistent supply. However, with the smart grid comes the introduction of new operation centres that use real-time information to help efficiently transform the electricity network into an organised system, effectively reducing the chance of blackouts and failures.
The highly technological smart grid will also have the power to resist attacks and natural disasters through the introduction of robust security protocols.
The advantages and disadvantages of the smart grid system
Rolling out the smart grid should help keep the lights on in Britain, reducing the threat of blackouts while hopefully providing value for money for the consumers.
Aside from helping the country transition to a low carbon energy industry, it will also benefit consumers through a variety of means that we look at in detail below. Obviously rolling out the smart grid also raises a few areas of contention for consumers and we also look at these.
Advantages of the smart grid for the customer
Empowering the consumer
The smart grid will give the consumer the potential to save money; however, this is very much dependent on the consumer acting on the information available to them. For example, one of the potential benefits of the smart grid is the time-of-use tariff, which will charge customers less for using electricity at off-peak times and more at peak times (like a much more accurate Economy 7 tariff).
Obviously, while this has the potential to allow consumers to change their energy usage habits to ensure they pay less on their bills, it also means that they may be charged more if they use electricity during peak times. This ties in with real-time electricity consumption.
Real-time electricity consumption
Perhaps the main advantage that the smart grid will offer the consumer is an increased responsibility over their electrical usage. Real-time consumption will display up-to-the-minute information on how much electricity is being used and at what price.
The end of estimated bills
The smart grid allows the energy companies to see energy usage from individual houses in real time if they so require, so they should be able to bill exactly the right amount each month. No more overpaying during the summer and getting whacked with sudden additional payments when they finally get actual energy usage details from the meter.
Electrical reliability/swifter power outage detection
Due to the increase of data available within the smart grid, power outages, or blackouts, can be predicted and sometimes even prevented altogether. The self-healing capabilities of the grid allow problems to be fixed quickly, potentially before customers are even aware of the issue. This obviously creates a more reliable electricity network that should mean power failures become a thing of the past.
Lower carbon footprint
The smart grid – through initiatives like the time-of-use tariff – should help lower peak energy demand, meaning less generating capacity will be needed. This should result in lower carbon emissions, since fewer old fossil fuel generators will need to be switched on.
The improved integration of renewables inside the grid also helps minimise carbon emission within the UK’s electrical generation mix.
Remotely monitored usage
The introduction of the smart meter within the smart grid system means that utility companies no longer have to pay the high admin fees involved with taking meter readings. This is because information is passed from the home's smart meter, along the 2-way dialog communications system, into the smart grid's database. The removal of these admin fees means that the savings the companies make may, to a certain extent, be passed on to the consumer.
At present, the National Grid is not suited to high levels of electricity being produced by renewable energy sources. The reason is the complexity of balancing supply and demand, which was hard enough when operators could predict when power stations were going to be offline and online.
Think about the added complexity that 1 million homes with solar PV have added to our energy mix. We now have millions of micro power stations, all producing unpredictable amounts of energy.
The introduction of the smart grid allows the increased integration of renewable energy into our energy mix. This will reduce our dependence on imported fossil fuels, relieving us from these volatile markets. Not only will it save the consumer money, it will also lower the carbon footprint of the grid, so consumers can be safe in the knowledge that the electricity they use is as green as can be and is being produced here in the UK, with no need to import fuels.
Disadvantages for the consumer resulting from the implementation of the smart grid
Cost of installation passed on
The upfront capital cost of installing smart meters nationwide is quite frightening, with reports from governmental sources suggesting it may be close to £12 billion. This has fuelled the public perception that the money required for this revolutionary smart grid will come from the consumer. However, this may not necessarily be the case, as utility companies will pay for the technological improvements and recoup the costs through the efficiencies they bring.
Lack of electrical pricing clarity
The smart grid brings with it the potential for utility companies to alter electricity prices through real-time consumption information. One of the key concepts of the smart grid is that it will aim to level off the peaks and troughs in electricity demand. The way that this will be achieved is to alter unit prices through time of use tariffs.
Energy companies will increase the price of electricity during peak times while decreasing it during periods of reduced demand. However, while people have previously been comfortable with how much they are paying, whether through a standard tariff or Economy 7, the smart grid does add complexity to bills.
Loss of smart appliance control
Many people have read about the potential loss of control of smart appliances. The concern is that utility companies could remotely switch off your smart freezer during periods of peak demand in order to reduce pressure on the grid.
Now, while neither you nor your freezer will be able to tell, for many it represents a step in the wrong direction towards a more regulated and controlled state. However, if switching off your smart appliances remotely means that you are using less electricity during peak times and therefore saving money, then surely this could also be looked at as an advantage.
Smart meter health impacts
Some consumers may be apprehensive about the installation of smart meters outside their homes in light of media coverage of potential health impacts. The worry is that smart meters, through their wireless communications with the grid, will emit harmful radiation that could cause cancer. However, we can categorically reassure people that there is no scientific research suggesting smart meters are harmful to human health.
Carbon capture and storage (also known as carbon sequestration) is a method of mitigating the CO2 emitted from the combustion of fossil fuels. It is a way of storing the gas before it is allowed to enter the atmosphere and more recently it also refers to scrubbing CO2 from ambient air, for example using carbon scrubbers or converting it into a useful product via artificial photosynthesis.
Various methods have been suggested for storing CO2, including storing it in existing geological formations, deep in the oceans, or in the form of mineral carbonates. These are discussed in more detail below.
Geological storage
This method involves injecting CO2 into underground geological formations, including old oil fields, gas fields, saline formations and old coal mines. Injecting CO2 into declining oil fields can force more oil out of the well, and the cost of storing the gas can be partially offset by the sale of the extra oil recovered; however, these fields have limited capacity and suitable locations are finite. In the case of coal mines, CO2 attaches itself to the surface of the coal, which in turn releases methane that can be sold to offset the cost of storing the gas in the first place – although, as with burning the extra oil, burning the methane produces more CO2. It is also important that the coal field is not too permeable, otherwise the gas could leak. That said, good sites exist in all of these geological settings, where the gas could be stored for millions of years.
Ocean storage
There are two major methods of ocean storage. The first is pumping the gas down to depths of 1-3 km and letting it rise; it dissolves into the seawater, from which it will slowly return to the atmosphere. The other is pumping it down below 3 km, where the pressure causes the CO2 to liquefy; as this liquid is denser than seawater, it sinks to the sea floor and forms non-reactive pools of liquid CO2. Both methods have potential, but the additional quantities of the gas may well upset ecosystems, so more research needs to be completed in this field.
Mineral storage
This method involves reacting the CO2 with metal oxides to form metal carbonates. The reaction requires heat; however, the metal oxides needed (such as magnesium oxide) are plentiful, and the carbonates produced are very stable, so they will hold on to the gas for millennia. The major drawback is the heat required for the reaction, so research is ongoing to make this a more economically viable option.
Carbon Dioxide Scrubbers
What are Carbon Dioxide Scrubbers?
In other sections of this site we have covered the concept of carbon capture and storage: preventing carbon dioxide from entering the atmosphere by capturing and storing it. Most of the technologies we have looked at focus on removing the gas where it is produced in high concentrations, for example in the exhaust gases of coal power stations.
Carbon dioxide is found in the air everywhere though; so why just concentrate on where the gas is emitted in higher quantities?
It is possible to scrub CO2 from the air anywhere; the technology has been around for decades and is used on submarines and spacecraft, to name but a couple of examples. So the potential is there to research and build on this technology, scaling it up so it can be positioned to effectively scrub the air in any location.
How does CO2 Scrubbing work?
Despite several different designs currently being in development, they are all based on a common chemical reaction. Air is sucked into the machinery, where it is brought into contact with a sorbent material that chemically binds with the carbon dioxide. A sorbent material is simply one that absorbs a gas or liquid (for example, a sponge is a sorbent, as it absorbs many times its own weight in water).
The greater the surface area of the sorbent, the more efficiently it will absorb the gas or liquid. Different mechanisms have therefore been suggested to maximise exposure of the sorbent to the carbon dioxide, thereby maximising its scrubbing ability.
The Palo Alto Research Center has proposed drawing the CO2 through a fine mist of liquid sorbent. Housed in towers several metres high, the mist would react with the gas and be collected in a chamber where the two would once again be separated. The pure CO2 could be compressed into liquid form and removed, while the sorbent would be recycled and used again to collect more of the gas.
Klaus Lackner has created another proposal to maximise the surface area of the sorbent: applying solid sorbent to thin sheets and allowing the carbon dioxide to react with it. Once the initial reaction has taken place, liquid chemicals are washed over the sheets; these form a stronger bond with the CO2 than the sorbent does. The liquid, now bound to the CO2, can then be collected and heated to strip out the CO2. Once again the pure CO2 can be compressed into liquid form and removed, while the liquid is recycled and used again to wash future CO2 from the sheets.
Issues facing the technology
The air-capture machines are electrically powered, and most electricity produced (via non-renewable methods) has carbon dioxide emissions associated with it. So an important question is whether the carbon dioxide stripped from the air exceeds the carbon dioxide 'produced' to drive the machine. Klaus Lackner's prototype uses 100 kWh of electricity to remove 1 tonne of CO2 from the air, and this power requirement equates to 35 kg of CO2 being produced as emissions, so the amount of gas removed far outweighs the amount produced. If the energy used to drive it is derived from renewable sources, the figures become even more attractive.
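The energy balance quoted above can be checked directly. The grid carbon intensity below is simply the value implied by the 35 kg figure, not an independently sourced number:

```python
# Net CO2 balance for Lackner's prototype, using the figures quoted above:
# 100 kWh of electricity removes 1 tonne of CO2, emitting 35 kg in the process.
removed_kg = 1000.0
electricity_kwh = 100.0
grid_intensity_kg_per_kwh = 0.35   # implied by 35 kg per 100 kWh

emitted_kg = electricity_kwh * grid_intensity_kg_per_kwh
net_removed_kg = removed_kg - emitted_kg

print(f"Emitted: {emitted_kg:.0f} kg, net removed: {net_removed_kg:.0f} kg")
print(f"Roughly {removed_kg / emitted_kg:.0f}x more CO2 removed than emitted")
```

So each tonne scrubbed costs only a few dozen kilograms in grid emissions, and the ratio improves further with renewable power.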
Another issue facing the technology is that the sorbent material cannot be recycled forever; it has a finite lifecycle, after which it must be replaced. This makes sorbent demand high (and expensive), and there are maintenance costs associated with swapping the sorbent material over. The cost currently associated with removing 1 tonne of CO2 from the atmosphere using these carbon scrubber methods is about £150, while the cost of trading a tonne of carbon is about £6-£13 (see carbon trading). Only when the cost of removing a tonne is lower than the trading cost will this become truly commercially viable. Dr Lackner has suggested that, with technical improvements and the economies of scale achievable if the products become commercially successful, the cost will come down to approximately £30 per tonne. In addition, carbon trading prices will rise in the future, as countries looking to fulfil their green promises endeavour to make the cost of emitting CO2 unattractive.
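The viability argument above is a straightforward comparison, sketched here with the costs quoted in the text:

```python
# Commercial viability check using the figures quoted above:
# scrubbing ~£150/tonne today vs a carbon trading price of ~£6-£13/tonne.
scrub_cost_per_tonne = 150.0
carbon_price_range = (6.0, 13.0)        # £/tonne
projected_scrub_cost = 30.0             # Lackner's target with scale

# Viable only if removal is cheaper than the top of the trading range.
viable_now = scrub_cost_per_tonne < carbon_price_range[1]

print(f"Viable at today's prices: {viable_now}")
print(f"Even at the £{projected_scrub_cost:.0f}/t target, the carbon price "
      f"must rise above £{projected_scrub_cost:.0f}/t for a profit")
```

The gap is currently an order of magnitude, which is why both falling scrubbing costs and rising carbon prices are needed before the economics work.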
Finally, where do we position the carbon scrubbers? In the grand scheme of things, CO2 is present in the atmosphere at very low concentrations – roughly 0.04% (with around 99% made up of nitrogen and oxygen) – so removing it from the atmosphere is difficult. Therefore, when we produce compressed gas via carbon scrubbing, it is important that we have a use for it. CO2 is a useful gas in its own right: it can be pumped into commercial greenhouses to increase plant growth, injected into natural gas reserve beds to drive more of the gas out, and even transformed into fuel for transport. So we should erect carbon scrubbers where the CO2 produced can then be put to a useful function. In the future, if the technology takes hold and is profitable (the stored gas can be sold for more than it costs to remove it), we could see the widespread rollout of carbon scrubber 'orchards', simply acting to remove the gas from the atmosphere; but until that time it is important that we spend time considering the best locations and set-ups for these technologies.
Biochar – essentially charcoal, and the key ingredient of the fertile Amazonian soils known as Terra Preta – is a carbon-rich product created from the pyrolysis of biomass. Pyrolysis simply refers to the thermochemical decomposition of plant-derived organic matter in a low- or zero-oxygen environment. During this process the biomass is converted to biochar and syngas (which can be captured and used to make heat and power). The lack of oxygen prevents combustion, and the hotter the temperature at which the reaction takes place, the quicker pyrolysis occurs.
The biochar produced varies depending on both the biomass source used and the temperature at which pyrolysis takes place. At temperatures of approximately 400°C, the process produces more char (the solid part of biochar), while pyrolysis at higher temperatures yields more syngas and a more porous and absorptive biochar, which has greater potential to adsorb toxic substances.
The two main methods of pyrolysis are “fast” pyrolysis and “slow” pyrolysis. Fast pyrolysis yields 60% bio-oil, 20% biochar, and 20% syngas, and can be done in seconds, whereas slow pyrolysis can be optimised to produce substantially more char (~50%), but takes on the order of hours to complete.
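The yield fractions above translate directly into product masses. The fast-pyrolysis split comes from the text; the slow-pyrolysis split beyond the ~50% char figure is assumed here for illustration:

```python
def product_masses(biomass_kg, fractions):
    """Mass (kg) of each pyrolysis product from a given dry biomass input."""
    return {product: biomass_kg * frac for product, frac in fractions.items()}

# Fast split quoted above; slow bio-oil/syngas split is an assumed fill-in.
fast_yields = {"bio_oil": 0.60, "biochar": 0.20, "syngas": 0.20}
slow_yields = {"bio_oil": 0.30, "biochar": 0.50, "syngas": 0.20}

fast_products = product_masses(100.0, fast_yields)
slow_products = product_masses(100.0, slow_yields)

print("Fast pyrolysis:", fast_products)
print("Slow pyrolysis:", slow_products)
```

From 100 kg of dry biomass, slow pyrolysis locks away two and a half times as much char as fast pyrolysis, at the cost of bio-oil output and reaction time.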
Why is biochar such a big deal?
The case for biochar makes impressive reading when you see how much carbon dioxide is released into the atmosphere, and how quickly, by plants, animals and humans. Carbon dioxide released by human activities makes up approximately 70% of greenhouse gas emissions. Aside from lowering individual energy use and producing electricity from greener or renewable sources, as mentioned elsewhere on the site, in order to actually decrease the amount of carbon dioxide in the atmosphere we need to 'trap' it. See also Carbon Capture & Storage.
In photosynthesis, plants and trees use carbon dioxide, combined with water and sunlight, to produce sugars, thereby locking the carbon within their very matter. Until a plant or tree dies, it will continue to absorb and store carbon dioxide, using it as fuel for growth. Upon death, the plant or tree starts to decompose as microbes and fungi break it down. This process releases the carbon dioxide back into the atmosphere within a year or two, so the plant only acts as a carbon sink for a relatively short time.
During pyrolysis, 50% of the carbon locked in the biomass is converted into biochar and the rest is converted to syngas which can be captured and used to produce heat and power. The biochar produced is chemically and biologically more stable than the original carbon form it comes from, and can remain stable in soil for hundreds to thousands of years. Therefore biochar has the potential to play an important role as a long term carbon sink, sequestering the carbon from the atmosphere and partially offsetting greenhouse gas emissions produced by burning fossil fuels.
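A back-of-the-envelope calculation shows what the 50% conversion figure means in practice. The biomass carbon content used here is an assumed typical value, not from the text:

```python
# Carbon sequestered by pyrolysing 1 tonne of dry biomass, using the 50%
# conversion figure quoted above. The 45% carbon content is an assumption
# (typical dry biomass is roughly 45-50% carbon by mass).
biomass_kg = 1000.0
carbon_fraction = 0.45     # assumed carbon content of dry biomass
char_conversion = 0.50     # share of biomass carbon locked into biochar

carbon_locked_kg = biomass_kg * carbon_fraction * char_conversion
# Each kg of carbon corresponds to 44/12 kg of CO2 (molar masses of CO2 and C)
# kept out of the atmosphere.
co2_equivalent_kg = carbon_locked_kg * 44.0 / 12.0

print(f"Carbon locked in biochar: {carbon_locked_kg:.0f} kg "
      f"(~{co2_equivalent_kg:.0f} kg CO2 equivalent)")
```

Under these assumptions, each tonne of dry biomass pyrolysed keeps on the order of 800 kg of CO2 out of the atmosphere for centuries, rather than the year or two a decomposing plant manages.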
Biochar use in agriculture
The addition of biochar to agricultural soils can improve crop yield, aiding the retention of nutrients and water, decreasing soil acidity and decreasing the release of non-carbon-dioxide greenhouse gases such as methane and nitrous oxide. As mentioned previously, the physical properties of biochar depend on the temperature at which it is produced; it can therefore be designed with specific qualities to target the distinct properties of soils. For example, if a more porous biochar is produced and applied to the soil, it will adsorb and lock potential contaminants within its structure, so the crops grown will be more contaminant-free.
Is biochar economically viable?
The economic viability of biochar depends on a few factors, the first being the cost of the feedstock. If feedstock is available close to the pyrolysis equipment, the process is more attractive because the associated transport costs are minimal. In addition, if the feedstock would otherwise incur a waste disposal fee, the cost of producing biochar is further reduced. Another key driver of the economic viability of biochar is the return achievable through the sale of the syngas (resulting from the pyrolysis process) to energy producers. Biochar can also be used in agriculture, reducing fertiliser requirements and increasing yields. Furthermore, the biochar producer or user may benefit from some form of carbon credit under an emissions trading scheme.
The future of biochar
Biochar, or charcoal, has been produced for hundreds of years and is still made regularly on a small scale, allowing subsistence farmers to produce small quantities of biochar for their farms or gardens. However, industrial-scale production is still in its infancy, with research ongoing within the scientific and technological communities into the most effective method of producing it on a large scale.
Biochar offers a very viable route to carbon sequestration; however, there is a long way to go in terms of research, and in building enough pyrolysis plants, before this technology can make a large contribution to mitigating climate change. The UK Biochar Research Centre, headquartered at the University of Edinburgh, is making good progress in developing research in this area and promoting the benefits of biochar.
Biochar is a very stable carbon sink, and therefore can become a viable carbon sequestration method.
As part of the pyrolysis process, syngas is produced as a by-product that can be used to generate power.
The biochar can be produced to target specific deficiencies in soils to aid agriculture.
Biochar is easy to produce on a small scale.
Producing biochar on an industrial scale is still at the research level, with the best techniques yet to be established.
Biochar requires a source of feedstock, which unless harvested locally can be expensive to transport.
The systems used to create biochar depend on the feedstock type – it is ambitious to expect a ‘one size fits all’ standard system.
Bio CCS Algal Synthesis
What is CCS algal synthesis?
Bio CCS algal synthesis is a new process in which the carbon cycle that normally takes millions of years is reproduced in an algal synthesiser in 24 hours. It is the latest in a long line of potential Carbon Capture and Storage technologies, although it is focused more on capturing carbon dioxide and turning it into a useful product than on simply storing CO2.
MBD Energy Limited and an algal research team from James Cook University have developed a 5000m2 test facility designed to produce 14,000 litres of oil and 25,000kg of algal meal from every 100 tonnes of CO2 consumed.
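The quoted test-facility figures can be rescaled to a per-tonne basis, which makes them easier to compare with other capture routes. This is simple division of the numbers cited above, nothing more:

```python
# Quoted figures from the MBD / James Cook University test facility:
# 14,000 litres of oil and 25,000 kg of algal meal per 100 tonnes of CO2.
oil_litres, meal_kg, co2_tonnes = 14_000, 25_000, 100

print(oil_litres / co2_tonnes)  # 140 litres of oil per tonne of CO2
print(meal_kg / co2_tonnes)     # 250 kg of algal meal per tonne of CO2
```

In other words, every tonne of CO2 consumed yields roughly 140 litres of algae oil and a quarter of a tonne of algal meal.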
How does this form of carbon capture work?
The process of algal synthesis involves injecting captured flue gases (the exhaust gases from burning fossil fuels) into a waste water growth medium infused with locally selected strains of micro algae, contained in an enclosed membrane system. The algae photosynthesise in sunlight, using the CO2 as fuel for growth and doubling their mass every 24 hours, causing the waste water to thicken as growth takes place. The algae are harvested daily and crushed to produce algae oil, algae meal and clean water (35% oil and 65% algae meal). In this manner, the algae capture CO2 that would otherwise find its way directly into the atmosphere, thereby offsetting one of the major greenhouse gases.
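A doubling time of 24 hours implies exponential growth, which is why daily harvesting keeps the system in balance. A minimal sketch of the arithmetic, with an illustrative 1kg starting mass:

```python
# With biomass doubling every 24 hours, mass after d days is
# initial mass x 2**d. Purely illustrative of the growth rate quoted above.
def algal_mass(initial_kg, days):
    """Algal biomass after a given number of 24-hour doubling periods."""
    return initial_kg * 2 ** days

for d in (1, 3, 7):
    print(d, algal_mass(1.0, d))  # 1 kg grows to 2, 8, then 128 kg
```

Left unharvested for a week, a single kilogram of algae would exceed a hundred kilograms, which is why the daily harvest matters.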
What can we use the resulting by-products for?
Biofuels – The algae oil produced as a by-product of this CCS method is ideally suited to biofuel production. In addition, the glycerine, a secondary by-product of the biofuel production process, can be used in areas such as pharmaceuticals, cosmetics and food production. The algae oil can also be used to make plastics.
Feed for livestock – The algae meal that results from CCS algal synthesis can be used as feed for livestock. Algae meal has an advantage over conventional livestock feed in that it contains far less cellulose; this is because algae are supported by water, whereas plants must support themselves in air and so need cellulose for rigidity. The breakdown of cellulose by the ruminant digestive system releases methane, so a low-cellulose feed has the benefit of producing less methane as a by-product of digestion.
Biomass – the algae meal can also be used as biomass for fertilizer, bio plastic production and energy production.
MBD Ltd are currently in the process of moving from the test facility to full-scale display plants at a number of Australia’s major coal-burning power stations. These ‘proof of concept’ projects, which commenced last year, take the greenhouse gases from the power stations’ flue chimneys and produce oil and algae meal from them.
A compelling, sustainable solution to three significant world issues: oil, food and CO2.
Bio CCS removes CO2 from the air, thereby mitigating one of the gases thought to be responsible for global warming.
The Carbon Cycle process, that normally takes millions of years, can take 24 hours.
Commercially harnessing the technological process is still in its infancy – more trials are needed to prove the technology has a worthwhile payback period.
What is photosynthesis?
Photosynthesis is the process by which green plants use sunlight to synthesise foods from carbon dioxide and water. The process combines six molecules of carbon dioxide and six molecules of water to produce one molecule of glucose and six molecules of oxygen. The glucose is stored in the plant as starch and cellulose (simply long-chain glucose molecules, known as polysaccharides) as a source of food for the plant to survive and grow. The oxygen produced as a by-product of photosynthesis is what most animals rely on to breathe, so the process plants and trees carry out on our behalf is critical to our survival.
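The 6:6:1:6 ratio described above corresponds to the balanced equation 6CO2 + 6H2O → C6H12O6 + 6O2, and a quick mass balance confirms it, using approximate whole-number atomic masses:

```python
# Mass balance for photosynthesis: 6 CO2 + 6 H2O -> C6H12O6 + 6 O2
# Approximate atomic masses: C = 12, H = 1, O = 16.
C, H, O = 12, 1, 16
co2 = C + 2 * O                     # 44
h2o = 2 * H + O                     # 18
glucose = 6 * C + 12 * H + 6 * O    # 180
o2 = 2 * O                          # 32

reactants = 6 * co2 + 6 * h2o       # 372
products = glucose + 6 * o2         # 372
print(reactants == products)        # → True: the equation balances
```

The 372 mass units entering as carbon dioxide and water leave exactly as glucose and oxygen, which is the conservation the plant exploits to lock solar energy into sugar.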
We can already harness light energy from the sun to produce electricity via solar photovoltaic cells. However, there is a fundamental issue with electricity: we have no suitable way of storing what is produced (batteries are limited), so we have to use the electricity as it is generated or we essentially lose it.
The beauty of photosynthesis is that it locks energy from the sun within the chemical bonds in the glucose molecule. Therefore plants are not only producing energy, but they also have the ability to store it.
If we could somehow artificially replicate the photosynthetic process carried out by plants, we would be able to lower carbon dioxide concentrations in the atmosphere while also producing sugar that we could use for food and energy production. The ultimate goal, though, is to take the natural process of photosynthesis and improve it: making it more efficient, absorbing more light across a wider range of wavelengths, and potentially even operating in the dark to produce more energy.
There are three major scientific challenges in artificial photosynthesis that we need to find answers to before we can create fuel directly from sunlight on an economical scale:
1. Light capture and moving the electrons to the reaction centres
2. Splitting water into Hydrogen and Oxygen
3. Reducing Carbon Dioxide
Overcoming the challenges in artificial photosynthesis
In plants, light capture is handled by leaves, which contain the green pigment chlorophyll. This pigment (along with accessory chlorophyll pigments) absorbs photons with wavelengths of ~430-700nm and, through a complex process, first splits water into its constituent parts, then combines hydrogen with carbon dioxide to make the sugars. In artificial photosynthesis we propose to use nanoparticles not only to replicate this process but to improve its efficiency.
Coating a surface with light-capturing titanium dioxide nanoparticles dramatically increases its surface area, and therefore its light-capturing potential. If this titanium dioxide is coupled with a dye and then immersed in an electrolyte solution with a platinum cathode, electrons are excited to the extent that they are displaced and produce a current.
This current can then be used to split water into its molecular components, thereby storing the solar energy in chemical bonds – particularly in the reduced form, hydrogen – again in the presence of nanoparticles, more specifically iridium oxide nanoparticles.
In the final part of natural photosynthesis (known to biologists as the dark reactions), carbon dioxide is captured by the chemical ribulose bisphosphate before entering the Calvin cycle, eventually producing one molecule of glucose.
Scientists are currently trying to establish the most efficient form of naturally occurring ribulose bisphosphate, with a view to making a wholly artificial, nanotechnology-based version that is more efficient than its naturally occurring relative.
Artificial photosynthesis research today
There are many barriers to recreating the natural process of photosynthesis that we still need to overcome, so we are very much in the early research and development phase of making artificial photosynthesis a viable energy source.
Research is going on all over the world, trying to crack what could one day become the renewable fuel of the future – not only using the sun’s energy more effectively than solar PV currently does (and than leaves themselves do), but also helping to remove some of the carbon dioxide that humans have added to the atmosphere by burning fossil fuels.
Thomas Faunce, a professor at the Australian National University, feels that artificial photosynthesis could one day be a game changer, providing cheap fuel for everyone in the world regardless of location. He is pushing for a worldwide collaborative approach to research in this area, similar to ITER or the Human Genome Project, sharing data and thereby concentrating effort on the most promising emerging technological solutions.
In August 2011 he coordinated the first international conference dedicated to the creation of a Global Artificial Photosynthesis Project, held on Lord Howe Island.
Other approaches being used to mimic photosynthesis
Nanotechnology is not the only avenue being explored to mimic photosynthesis. Another method uses giant parabolic mirrors to direct and concentrate sunlight onto two chambers separated by a ring of cerium oxide. The energy from the sun heats this cerium oxide to around 1,500°C, at which point it releases an oxygen atom into one of the chambers, from where it is pumped away. The deoxidised cerium is then moved into the other chamber, where carbon dioxide is pumped in; the deoxidised cerium strips one of the oxygen atoms from the CO2, creating carbon monoxide and the more stable cerium oxide, which can be reused in the reaction. A similar reaction is used to separate water into its constituent elements, hydrogen and oxygen.
Finally, a process first carried out in the 1920s, and named after its inventors Franz Fischer and Hans Tropsch, can be applied. The Fischer-Tropsch process involves reacting the hydrogen with the carbon monoxide over a transition metal catalyst such as cobalt, producing a hydrocarbon that can then be used as a fuel.
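For straight-chain alkanes, the overall Fischer-Tropsch stoichiometry is (2n+1) H2 + n CO → CnH2n+2 + n H2O. A small sketch of the feed requirement follows; the helper function is hypothetical, introduced only to illustrate the ratio:

```python
# Fischer-Tropsch overall stoichiometry for a straight-chain alkane:
#   (2n+1) H2 + n CO -> C_nH_(2n+2) + n H2O
# Hypothetical helper: moles of syngas needed per mole of alkane product.
def fischer_tropsch_feed(n):
    """Syngas requirement for one mole of the C_n alkane."""
    return {"H2": 2 * n + 1, "CO": n}

print(fischer_tropsch_feed(8))  # octane: {'H2': 17, 'CO': 8}
```

The H2:CO ratio approaches 2:1 as the chain grows, which is why syngas composition matters so much to the process.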
The process outlined above has been demonstrated in laboratory experiments; however, the reactions cannot yet be sustained.
Artificial photosynthesis conclusion
As mentioned elsewhere on TheGreenAge, photovoltaic panels and solar heat collectors are two of our most popular mechanisms for taking advantage of solar energy, but both are currently inefficient. As humans, we have struggled to replicate nature’s photosynthetic process, in which a plant transforms simple molecules into others with richer energy content – probably the most effective way of storing solar energy. If either of the techniques above can be mastered, we will have gone some way towards replicating one of nature’s best-kept secrets.
We would go some way towards mimicking nature’s most effective process for creating energy-rich products from simple input materials.
It would produce a new fuel that can power vehicles from naturally occurring input materials: CO2, water and sunlight.
It makes carbon storage more economically viable, as the CO2 can be used to create a saleable product.
If we can tap into existing large producers of CO2, such as power station exhausts, we can use the CO2 twice before it enters the atmosphere.
So far the reactions are inefficient / unsustainable.
The reaction needs the heat generated from sunlight to power it, so it is not suitable for operation everywhere in the world.
Liquid Air Storage
What is Liquid Air Energy Storage?
Liquid Air Energy Storage (LAES) is a form of storing excess energy, much like CAES (Compressed Air Energy Storage) or battery storage systems. The system is based on separating carbon dioxide and water vapour from the air to leave a higher concentration of nitrogen. This nitrogen can then be liquefied for storage, and expanded back to a gas when we need to generate electricity. Liquid nitrogen is commonly used in industrial processes and can be stored and transported in large volumes at atmospheric pressure. This differs from the CAES process, which requires high-pressure storage chambers. Liquid nitrogen is also well suited to the job because it has a high expansion ratio: it expands roughly 700 times in volume when turning back into a gas from a liquid.
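The roughly 700-fold expansion quoted above is what does the mechanical work in an LAES plant, and the scale is easy to picture with a one-line calculation:

```python
# Illustrating the ~700-fold liquid-to-gas expansion of nitrogen
# at atmospheric pressure, as quoted above (approximate figure).
EXPANSION_RATIO = 700  # litres of gas per litre of liquid

def gas_volume_litres(liquid_litres):
    """Approximate gas volume released when liquid nitrogen boils off."""
    return liquid_litres * EXPANSION_RATIO

print(gas_volume_litres(10))  # → 7000: 10 L of liquid yields ~7,000 L of gas
```

It is this enormous, rapid expansion that can be directed through a turbine to recover the stored energy.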
The LAES form of storage works well with intermittent power sources like wind and solar, because the surplus electricity generated can be used to filter the air into nitrogen and then liquefy it. The electricity is recovered when the liquid expands back into its gaseous form.
How does the Liquid Air Storage process actually work?
The process depends on using liquefied air, or liquid nitrogen (78% of air), which can be stored in large volumes at atmospheric pressure. Air is drawn through an inlet and into a compressor; on entering the compressor the air is a mixture of oxygen, water vapour, carbon dioxide and, mostly, nitrogen. When too much electricity is generated, the excess is used to power the compressor and the chilling unit. Here the nitrogen is separated and cooled further until it condenses into a liquid at -196°C. The liquid nitrogen is then stored in a chilled, insulated vessel until it is needed for power recovery. If there is a peak in demand, or the grid is struggling to meet demand, the liquid nitrogen can help. The first step is to transfer the liquid into an ambient-temperature chamber, usually a heat exchanger. Since liquid nitrogen boils at -196°C, exposure to ambient temperature rapidly vaporises it, creating enormous expansion as the liquid becomes a gas again. The expanding gas drives a turbine (a cryogenic engine), which in turn drives a standard electric generator, and the electricity produced is fed back into the grid.
Can Liquid Air Storage energy be a commercially viable solution?
According to the Institution of Mechanical Engineers (IMechE), the process is anywhere between 25% and 70% efficient, meaning that up to 70% of the stored electricity can be recovered from the cryogenic process described above. The process becomes more efficient if the cryogenic chambers, air vacuums and generator are located near a factory or an existing power station: the excess heat these usually vent can be put to use in the LAES process. However, despite a possible 70% efficiency, it is worth noting that batteries achieve around 80%. To be commercially viable, a cryogenic LAES plant therefore needs to close this efficiency gap. Ideally the plant should also be located close to an intermittent renewable source of energy, such as solar PV, wind power or marine energy, to minimise electricity and heat losses during transmission.
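The round-trip figures cited above can be compared directly by asking how much of a given block of surplus electricity each route returns. The 100 MWh figure is an arbitrary illustrative quantity; the efficiencies are the ones quoted above:

```python
# Comparing quoted round-trip efficiencies: how much of 100 MWh of
# surplus electricity each storage route returns. Efficiencies are the
# figures cited above (LAES 25-70%, batteries ~80%), not measurements.
def recovered_mwh(surplus_mwh, efficiency):
    """Electricity recovered after a store-and-release round trip."""
    return surplus_mwh * efficiency

for name, eff in [("LAES (standalone)", 0.25),
                  ("LAES (with waste heat)", 0.70),
                  ("Battery", 0.80)]:
    print(f"{name}: {recovered_mwh(100, eff):.0f} MWh recovered")
```

The gap between 25 MWh and 70 MWh recovered is exactly why co-locating an LAES plant with a source of waste heat is so important to its economics.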
How can Liquid Air Energy Storage combat volatilities in electricity demand?
The main challenge we face is how to deal with erratic demand for electricity and the somewhat intermittent supply from non-traditional sources like wind, solar and marine energy. One answer could be to use a solution like LAES to store energy when it is plentiful, then convert it back to electricity to meet demand. Current power grids need to be upgraded so they can work smarter, providing energy when it is needed. Many argue that wind, solar and marine generation are tied to local environmental conditions, and that there is little point transporting this energy around the country via the grid because much of it is lost in transmission. Having LAES solutions sitting alongside more responsive local grids should therefore also be considered, rather than simply bolting storage onto an already inefficient national grid.
How does Liquid Air Storage compare to Compressed Air Energy Storage?
The energy density of cryogenic liquids such as liquid nitrogen compares very well against current alternatives such as compressed air, mainly because liquid nitrogen can be used and transported at atmospheric pressure. Compressed air must be stored in special pressurised tanks, which are liable to leaks, or in special underground storage spaces such as mine shafts, which are limited in availability.
Liquid Air Energy Storage and the UK
Cooling air through a cryogenic process to create liquid nitrogen is not new, but current efficiency levels are low because of the energy required first to produce the storage fluid and then to convert it back into electricity. Highview Power in Buckinghamshire operates a heat-recovery and energy-storage system using this process at 25% efficiency; it could increase this to 70% if it were located next to a conventional power station where excess heat could be recycled. The DECC is currently looking at additional measures to boost research and development in this area, and has recently announced that it will soon launch a scheme to get more companies involved. This will be in addition to the grants available through OFGEM.
What is chemical energy storage?
A common example of chemical energy storage is the battery, which uses the chemicals inside it to store energy and release it as electricity when required. Large batteries can act as chemical energy storage for industry and could make future energy generation solutions more efficient and profitable, by storing energy generated when demand on the grid is low and releasing it as required to help meet peak demand.
How does the battery chemical energy storage process actually work?
Batteries are portable devices that can be used in many different applications. The way a battery works is very simple and based on three components:
The anode (the negative pole),
The cathode (the positive pole),
The electrolyte (the liquid chemical that produces the flow of energy).
The anode and the cathode are also known as ‘terminals’; they are made of metal and separated by the electrolyte.
Converting stored energy to electricity
If you take a device like a light bulb, or a simple electrical circuit, and connect it to the battery terminals, the chemical at the anode releases electrons to the negative pole and ions into the electrolyte. This is the chemical oxidation reaction.
At the positive pole, the cathode accepts the flow of electrons, completing the circuit. The two reactions happen simultaneously: the ions transport current through the electrolyte while the electrons flow in the external circuit, generating the electric current.
Storing electrical energy in a chemical store
Battery energy storage works in reverse, transforming electrical energy into chemical energy. When excess electricity is produced on the grid, it can be channelled into a battery system and stored chemically.
Mobile phones and electric cars both take advantage of rechargeable battery systems. The hope is that in future this process can be ‘up-scaled’: electricity produced by intermittent renewable sources like wind or solar PV could be stored in large, industrial-sized battery systems.
What types of batteries can be used for mass energy storage?
There are a number of different battery solutions that are currently being used in industry and under consideration for mass scale national grid use. This section briefly considers each type for these ambitious future requirements.
Lithium-ion batteries
Lithium-ion batteries are the fastest-growing battery type in the consumer market today. They have many uses, including powering laptops, mobile phones and hybrid vehicles, due to the high amount of energy they can store. They are also highly energy-efficient, operate well across a wide range of temperatures, can be recycled and have a low level of self-discharge.
However, to be used as a grid storage solution this type of battery will require some refinement. It will need an improved lifespan (the number of charging and discharging cycles that can be achieved) and improved safety. Most importantly, the cost of producing lithium-ion batteries needs to come down – a storage solution needs to be cost-effective.
Lithium-ion polymer batteries
Like the lithium-ion battery, lithium-ion polymer batteries not only have a high energy output but also a good safety record and a longer lifespan. However, they too are uneconomical to produce, so production costs would need to come down to make them viable as a mass-produced storage solution.
Lead-acid batteries
Lead-acid batteries can be designed to power large applications and are relatively cheap, safe and reliable. They are already used in large storage and uninterruptible power supply solutions (e.g. emergency lighting and back-up generators), and could be scaled up further for grid use. They can also be easily recycled, and an infrastructure for doing so already exists.
The problem is that they are rather large, heavy and immobile. They also have poor cold-temperature performance and a short cycle life.
Flow batteries
Several characteristics of a flow battery system enable it to provide very high power and very high capacity on a grid-scale system. For example, unlike a conventional battery, the energy output is independent of the energy storage capacity: the power output depends on the fuel cell stack, while the storage capacity depends on the size of the electrolyte tanks, and the two can be sized independently. This is very useful when large current flows need to be delivered to a national electricity grid.
Their energy output to weight ratio can be up to three times better than that of lead-acid batteries, but they do have lower energy efficiency.
At the moment there are only experimental flow battery schemes in operation and, since they have not been around as long as the lithium-ion battery, electricity distribution industries are taking longer to adopt them.
Sodium-sulphur batteries
A sodium-sulphur battery is a type of molten-salt battery constructed from liquid sodium and sulphur. It has a very high energy and power density, as a result of sodium being a highly reactive alkali metal, along with a high charge/discharge efficiency (89-92%) and a long cycle life, and it is fabricated from inexpensive materials.
However, sodium-sulphur batteries operate at a temperature of about 300-350°C and therefore require energy to keep them operational. And, due to the highly corrosive nature of sodium polysulphides, the cells must be kept stationary. They are nonetheless ideal for energy arbitrage – helping to manage the load as the grid fluctuates between peak demand and peak supply.
Could chemical energy storage be a commercially viable solution?
We are still far from producing batteries that are a viable and cost-effective solution for managing the variation in grid systems. It would be incredibly expensive to build batteries capable of storing excess energy at grid scale, so implementing this storage solution could significantly increase the cost of electricity to consumers – highly unpopular in the current economic climate.
People use electricity around the clock, so electricity transmission and storage must both be treated as key parts of the system. Smart grids will require advanced utility-scale batteries to store electricity so it can be delivered when needed.
How does chemical energy storage compare to other storage technologies?
In the UK, the DECC has been running a programme funded by public money that looks at various energy-storage solutions that could be used in the national grid. At the moment there is no obvious solution as everything from compressed air storage to molten salt and battery power is being considered.
The UK does have pumped storage (hydroelectric), but not on a scale seen in countries like Norway and Canada, which make use of their natural topography.