Lemon

Lemon belongs to the family Rutaceae, the orange subfamily, and the genus Citrus.

In addition to the lemon, this genus includes the mandarin, orange, citron, bigaradia (bitter orange), grapefruit, and others.

According to the established classification, all these fruits are called citrus fruits.

All citrus fruits, including the lemon, are evergreens.


In the cold season they do not shed their leaves: there is no outflow of nutrients from them, as in other trees, but rather a constant accumulation. The leaves thus serve as a storehouse of elements important to the plant, which are spent exclusively on the growth of new leaves, shoots and branches, as well as on fruiting. It is very easy to distinguish a healthy tree from a sick one: a healthy lemon tree has an abundance of green, healthy leaves that are actively involved in the physiological processes of growth.

A lemon leaf lives, as a rule, two to three years. The leaves drop gradually as they age.

If a sharp loss of foliage is detected, this indicates that the growth mechanism is disrupted and the plant needs fertilizing. When a tree loses its leaves, its fruiting suffers.

The root system of citrus fruits has one interesting feature that should be given a little attention.

The roots of most plants carry a web of root hairs through which they draw water and nutrients from the ground. Citrus fruits, including the lemon, have none. Their role is taken over by special soil fungi that live on the roots of the tree in the form of thickenings and are called mycorrhiza. The relationship between fungus and tree is a symbiosis: the fungus receives nutrients from the tree and, in turn, supplies the plant with everything it needs for growth. The tree's capriciousness is largely explained by the behavior of this symbiotic fungus.

The fact is that mycorrhiza is quite sensitive to temperature and other factors.

For example, it tolerates neither a lack of moisture nor a lack of air when the soil is too dense.

At temperatures above 50 °C and below 7 °C it dies.

Flower buds are formed mainly in spring.

Buds develop for about a month from the moment they appear and only then bloom. Flowering lasts several days, during which pollination takes place. A few days after the petals fall, the rudiments of the fruit are formed.

Often, at the first fruiting, many ovaries form, but since a young tree cannot yet properly support them, many of the ovaries drop before reaching maturity.

Lemons are perhaps the most capricious of citrus fruits: temperatures below 7 °C are fatal to them, and sub-zero temperatures cause various disturbances of metabolic processes. The lemon is thus very whimsical, but this quality is more than compensated by the valuable substances it contains.


Distribution

Traditionally, the tropical and subtropical regions of Southeast Asia and India are considered the homeland of citrus fruits.

Lemon is no exception. In these territories, nature has created ideal conditions for its life:

the combination of a suitable amount of light, heat and moisture allows the tree to bear fruit all year round - flowering is observed 2-3 times a year.

As you can see, the conditions in which lemon culture originated are ideal, but this does not at all mean
that the range of this citrus has not spread to other parts of the world.

Citrus fruits in general have been cultivated in Asia not just for centuries but for millennia.

For example, the Chinese did not limit themselves to simply growing the crops and proved themselves skilled breeders.

Back in the first centuries BC, new lemon varieties were bred there, which came to Europe many centuries later.

On the territory of Russia, lemon can be grown in southern regions with a subtropical climate, for example, on the Black Sea coast of the Caucasus.

But even in a climate as mild as the Black Sea coast, greenhouses are often necessary, since in winter quite severe frosts and heavy snowfall are not uncommon.

To increase the lemon's frost resistance, breeders develop new varieties with greater tolerance of low temperatures.

What is a file system

Have you ever needed to format a new hard drive or USB drive and been offered a choice of abbreviations such as FAT, FAT32 or NTFS? Or have you tried to connect an external device, only for your operating system to fail to detect it? Are you sometimes frustrated by how long it takes your operating system to find a file?

If you have encountered any of the above, or have simply clicked through folders to find a file or application on your computer, then you have learned from your own experience what a file system is.

Many people may not use an explicit methodology for organizing their personal files on a PC (say, an article saved as what is a file_system.docx). However, for any device with persistent storage, the abstract job of organizing files and directories must be very systematic when reading, writing, copying, deleting and otherwise interacting with data. In the operating system, this task usually falls to the file system.


Persistent Data: Files and directories

Modern operating systems are becoming ever more complex and need to manage various hardware resources, process scheduling, memory virtualization and many other tasks. When it comes to data, many hardware aids, such as caches and RAM, have been designed to speed up access times and ensure that frequently used data is "near" the processor. However, after the computer is turned off, only the information stored on persistent devices, such as hard disk drives (HDDs) or solid-state drives (SSDs), remains. Thus, the OS must take special care of these devices and the data on them, since this is where users keep the data they care about.

The two most important abstractions developed over time for storage are the file and the directory. A file is a linear array of bytes, each of which you can read or write. While in user space we can come up with clever names for our files, under the hood there are usually numeric identifiers that track file names. Historically, this underlying data structure has been called the inode (index node). Interestingly, the OS itself does not know much about the internal structure of a file (i.e., whether it is an image, a video or a text file); in fact, all it needs to know is how to write bytes to the file for permanent storage and how to retrieve them later on request.

The second main abstraction is the directory. Under the hood, a directory is actually just a file, but it contains a very specific kind of data: a list of human-readable names mapped to low-level names. In practice, this means it lists other directories or files, which together form a directory tree in which all files and directories are stored.

Such an organization is quite expressive and scalable. All you need is a pointer to the root of the directory tree (typically the first inode in the system), and from there you can reach any other file on the disk partition. This scheme also allows files with identical names, as long as their paths differ (i.e., they sit in different places in the file system tree).

Technically, you can name a file whatever you like, but it is customary to denote the file type with a dot-separated extension (for example, .jpg in picture.jpg), though this is not required. Some operating systems, such as Windows, strongly encourage these conventions in order to open files in the appropriate application, but the contents of a file do not depend on its extension. The extension is just a hint to the operating system about how to interpret the bytes contained in the file.


Once you have files and directories, you need to be able to work with them. In the context of a file system, this means the ability to read and write data, manage files (delete, move, copy, etc.) and manage file permissions (who may perform all of the above operations?). How are modern file systems implemented so that all these operations are fast and scalable?


File system organization


When thinking about a file system, there are usually two aspects to consider. The first is the file system's data structures: what types of on-disk structures does the file system use to organize its data and metadata? The second aspect is access methods: how does a process open, read or write these structures?

Let's start with a description of the general organization on the disk of an elementary file system.

The first thing you need to do is divide your disk into blocks. A commonly used block size is 4 KB. Suppose you have a very small disk with 256 KB of storage. The first step is to divide this space evenly using your block size and to number each block (in our case, blocks 0 through 63):
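The arithmetic above can be sketched in a few lines. This is a toy calculation with the sizes assumed in the text, not the layout of any real file system:

```python
# Toy layout from the text: a 256 KB disk split into 4 KB blocks.
DISK_SIZE = 256 * 1024    # bytes
BLOCK_SIZE = 4 * 1024     # bytes

num_blocks = DISK_SIZE // BLOCK_SIZE
print(num_blocks)                  # 64
print(0, num_blocks - 1)           # blocks are numbered 0 .. 63
```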




Now let's break these blocks into different regions. Let's set aside most of the blocks for user data and call it a data area. In this example, let's fix blocks 8-63 as our data area:


If you noticed, we have placed the data area in the last part of the disk, leaving the first few blocks for use by the file system for other purposes. In particular, we want to use them to track information about files, such as the location of the file in the data area, its size, owner and access rights, as well as other information. This information is a key part of the file system and is called metadata.

To store this metadata, we will use a special data structure called an inode (index node). In the current example, let's set aside 5 blocks for inodes and call this area of the disk the inode table:

Inodes are usually not very large, for example 256 bytes. A 4 KB block can thus hold about 16 inodes, and our simple file system above contains only 80 inodes in total. This number is actually significant: it means that the maximum number of files in our file system is 80. With a larger disk you could certainly allocate more inodes, which translates directly into more files in your file system.
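The inode count follows directly from the sizes chosen above (all toy numbers from the text, not real-world values):

```python
# Toy numbers from the text: 256-byte inodes, 4 KB blocks, a 5-block inode table.
INODE_SIZE = 256            # bytes per inode
BLOCK_SIZE = 4 * 1024       # bytes per block
INODE_TABLE_BLOCKS = 5

inodes_per_block = BLOCK_SIZE // INODE_SIZE        # 16 inodes fit in one block
max_files = inodes_per_block * INODE_TABLE_BLOCKS  # 80: the ceiling on file count
print(inodes_per_block, max_files)                 # 16 80
```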

There are still a few things left to complete our file system. We need to keep track of whether inodes and data blocks are free or allocated. This allocation structure can be implemented as two separate bit arrays, one for the inodes and one for the data area.

A bit array (bitmap) is a very simple data structure: each bit records whether the corresponding object/block is free (0) or in use (1). We can give the inode bitmap and the data-area bitmap each their own block. This is overkill (a single 4 KB block holds 32,768 bits, and we only have 80 inodes and 56 data blocks to track), but it is a convenient and simple way to organize our file system.
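A minimal sketch of such a bitmap, with one list entry per bit for readability (a real file system packs these into actual bits; the class and method names are purely illustrative):

```python
# Minimal bitmap sketch: one entry per inode or data block, 0 = free, 1 = used.
class Bitmap:
    def __init__(self, n):
        self.bits = [0] * n

    def allocate(self):
        """Find the first free slot, mark it used, return its index."""
        for i, bit in enumerate(self.bits):
            if bit == 0:
                self.bits[i] = 1
                return i
        raise RuntimeError("no free slots")

    def free(self, i):
        self.bits[i] = 0

data_bitmap = Bitmap(56)         # 56 data blocks in the toy layout
first = data_bitmap.allocate()   # 0
second = data_bitmap.allocate()  # 1
data_bitmap.free(first)
third = data_bitmap.allocate()   # 0 again: freed slots are reused
```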

Finally, for the last remaining block (which, coincidentally, is the first block on our disk), we need a superblock. The superblock is a kind of metadata about the metadata: in it we can store information about the file system, such as the number of inodes (80), where the inode table begins (block 3), and so on. We can also put an identifier for the file system type in the superblock, so that it is clear how to interpret the nuances and details of a particular file system (for example, that it is a Unix-style file system, ext4, or perhaps NTFS). When the operating system reads the superblock, it obtains a map of how to interpret and access the various data on the disk.

The inode

So far we have mentioned the inode data structure, but have not yet explained what an important component it is. Inode is short for index node, a name that goes back to UNIX and earlier file systems. Almost all modern systems use the inode concept, though it may be called something else (dnode, fnode, etc.).

In essence, an inode is an indexed data structure: you go to a specific location (the index) and there find out how to interpret the next set of bits.


A specific inode is referenced by a number (the i-number), and this is the file's low-level name. Given the i-number, you can look up the inode's contents by navigating straight to its location. For example, from the superblock we know that the inode area starts at an offset of 12 KB.


Since the disk is not byte-addressable, we need to know which block to read in order to find our inode. With fairly simple math, we can compute the block ID from the i-number, the size of each inode, and the block size. We can then find the start of the inode inside that block and read the desired information.
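That math looks like this, using the toy layout's numbers (inode table starting at byte offset 12 KB, 256-byte inodes, 4 KB blocks):

```python
# Finding an inode on disk from its i-number, under the toy layout's assumptions.
INODE_SIZE = 256
BLOCK_SIZE = 4 * 1024
INODE_TABLE_START = 12 * 1024   # byte offset where the inode table begins

def inode_location(i_number):
    offset = INODE_TABLE_START + i_number * INODE_SIZE
    block = offset // BLOCK_SIZE           # which block to read from disk
    offset_in_block = offset % BLOCK_SIZE  # where the inode starts inside it
    return block, offset_in_block

print(inode_location(0))    # (3, 0): inode 0 sits at the start of block 3
print(inode_location(20))   # (4, 1024): inode 20 sits 1 KB into block 4
```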



The inode contains almost all the information you need about a file. Is it a regular file or a directory? What is its size? How many blocks are allocated to it? What permissions govern access to the file (i.e., who is the owner, and who may read or write it)? When was the file created and last accessed? And many other flags and metadata about the file.


One of the most important pieces of information stored in an inode is a pointer (or list of pointers) to where the file's data lives in the data area. These are known as direct pointers. The concept is fine, but for very large files you may run out of pointers in the small inode structure. Therefore, many modern systems add indirect pointers: instead of pointing directly at file data, an indirect pointer refers to a block in the data area that itself holds direct pointers, increasing the number of blocks a file can reference. Files can thus grow far larger than the limited set of direct pointers in the inode would allow.



Unsurprisingly, this approach can be extended to support even larger files using double or triple indirect pointers. Such a file system is said to have a multi-level index, which allows it to support large files, in the range of gigabytes or more. Common file systems such as ext2 and ext3 use multi-level indexing. Newer file systems such as ext4 use extents, a somewhat more elaborate pointer scheme.
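To see why indirection matters, here is a rough reach calculation. The pointer size (4 bytes) and the count of 12 direct pointers are assumptions for illustration (an ext2-like choice), not figures from the text:

```python
# How far direct and indirect pointers reach, under assumed sizes:
# 4 KB blocks, 4-byte block pointers, 12 direct pointers in the inode.
BLOCK_SIZE = 4 * 1024
POINTER_SIZE = 4
DIRECT_POINTERS = 12

ptrs_per_block = BLOCK_SIZE // POINTER_SIZE         # 1024 pointers per indirect block

direct_reach = DIRECT_POINTERS * BLOCK_SIZE         # 48 KB from direct pointers alone
single_indirect = ptrs_per_block * BLOCK_SIZE       # + 4 MB via one indirect block
double_indirect = ptrs_per_block ** 2 * BLOCK_SIZE  # + 4 GB via double indirection

print(direct_reach, single_indirect, double_indirect)
```

Each extra level of indirection multiplies the reachable file size by about a thousand, which is why two or three levels already cover gigabyte-scale files.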

Although the inode data structure is very popular thanks to its scalability, much research has been done on its effectiveness and on how far multi-level indexes are really needed. One study reported some interesting measurements of file systems, including:

* Most files are actually very small (2 KB is the most common size)


* The average file size is growing (roughly 200 KB on average)


* Most bytes are stored in large files (several large files take up most of the space)


* File systems contain many files (almost 100,000 on average)


* File systems are about half full (even as disks grow, file systems remain ~50% full)


* Directories are usually small (many of them have few entries, 20 or less)


All this points to the versatility and scalability of the inode data structure and to how well it supports most modern systems. Many optimizations have been made for speed and efficiency, but the basic structure has changed little in recent times.


Elon Musk fired his assistant after asking for a raise

I have deep respect for Elon Musk, even though he is criticized and called "a marketer riding on NASA's groundwork." Even if such criticism is justified, humanity lacks a healthy drive and a longing for something far away. After all, this is the first person in history simultaneously trying to colonize the Solar System, make electric cars mainstream and expand the potential of the brain (and much more). But those close to him have to pay for these far-sighted goals.

Numerous stories from employees and partners show that Musk can prioritize his work at any cost, even to the detriment of his professional and personal relationships. One such instructive case is described under the cut.


Fast forward to 2014, when Musk's personal assistant, Mary Beth Brown, decided to ask for a raise. Brown had worked for the modern-day Tony Stark for 12 years, commuting between Los Angeles and Silicon Valley every week, working late into the night and even on weekends.

She managed Musk's schedule across two companies (SpaceX and Tesla), handled public relations and often helped Musk make business decisions. She was like an extension of Musk. At least, that's what she thought.

Ashlee Vance, in his book "Elon Musk: Tesla, SpaceX, and the Quest for a Fantastic Future", mentions this story and describes Brown as follows:

Brown, or MB, as everyone called her, became a loyal assistant to Musk, recreating in real life something like the relationship between Tony Stark and Pepper Potts in Iron Man.

If Musk worked twenty hours a day, so did Brown. Over the years she brought Musk food, arranged his business appointments, spent time with his children, picked up his clothes, handled press inquiries and, when necessary, pulled Musk out of meetings to keep him on schedule. As a result, she became the sole link between Musk and all his interests and was an invaluable aide to the companies' employees.

Ashlee also noted that Brown made an outstanding contribution to SpaceX's early culture, paying close attention to every detail and helping balance the atmosphere in the office.

In early 2014, Brown approached Musk with high hopes and asked for a raise. To be precise, she wanted to be compensated like SpaceX's top managers. Musk, in response, suggested she take a couple of weeks of vacation so that, in her absence, he could assess the value of Brown's work. In other words, he wanted to understand just how indispensable Brown was to him.


"I told her: listen, I think you're a very valuable employee. Maybe that compensation is right. You need to take a two-week vacation, and I'll assess whether that's the case or not," Musk said, according to Vance's notes.

Brown took a two-week vacation, and Musk took up her work. After two weeks, when Brown came to the office, Musk told her that he no longer needed her services.

"When she returned, I concluded that this relationship would no longer work. Twelve years is a good run for any job. It will work out great for someone."

It came as a real shock to his assistant. After all, no one gets fired for asking for a raise, right? In the book, Ashlee writes that this unceremonious move stunned people inside SpaceX and Tesla and confirmed rumors of Musk's cruel stoicism and lack of empathy. Musk claims he offered Brown another job with the same salary, but she refused and left the company.

Now the question arises — why did Musk fire his assistant, who did everything right for him for a long time? Is it really because she asked for more money? Or has Musk realized that their relationship is no longer working? Or is it that Brown was unable to appreciate/understand Musk, despite having worked with him for twelve years?

From Brown's point of view, it is obvious that she was confident in her contribution to the companies that Musk managed, and therefore wanted Musk to treat her as the best player on his team. Musk, on the other hand, looked at this issue from a very realistic point of view, ignoring all of Brown's previous contributions.

Thus, in just two weeks, Musk decided that he no longer needed Brown's services. That is, for Musk, Brown had failed to become irreplaceable.

If you agree with Musk's decision to fire his assistant, it will teach you how important it is to make yourself as indispensable as possible before asking for a pay raise. On the other hand, if you side with the assistant, this is a valuable reminder that you should not tolerate underpayment and underestimation for years.

Elon Musk has disputed the story of Brown's dismissal. On August 11, 2017, he tweeted:

Of all the fake stories, this one bothers me the most. Ashlee never checked this story with me or my assistant. It is total nonsense. Mary Beth was a wonderful assistant for over a decade, but as the company grew, the role required several specialists instead of one generalist.

For those who think Musk is a saint who can do no wrong, I have to say that in reality Musk, like all other hard-driving businessmen, ruthlessly fires people who disagree with him or get in his way (according to Tim Higgins' new book "Power Play: Tesla, Elon Musk, and the Bet of the Century").


Whatever the real reason, I think this story teaches us some valuable lessons. We should not work in companies where our contribution goes unnoticed. Again, we shouldn't take our job (or our boss) for granted. After all, it doesn't matter how many years you have worked in the company or how close you are to the management — you can be replaced at any time. Keep this in mind and attach more importance to your personal life than the company you work for.

And of course, before asking for a raise, don't forget to give your lead at least a dozen reasons (if not more) why you deserve it. Better yet, don't work at companies where you have to ask for a raise at all.

Fuel from space debris

In the sci-fi thriller "Gravity" (2013), an American astronaut is stranded in open space after her ship is destroyed: Russia blows up a spy satellite with a missile, creating a rapidly expanding cloud of space debris. Ironically, this scenario recently played out in reality when Russia shot down an old Soviet satellite in a test of an anti-satellite missile. The probability that a piece of debris will penetrate a spacesuit during a spacewalk is normally about 1 in 2,700, but the Russian test increased this risk by 7%.

Space debris is a danger to active satellites and spacecraft. Presumably, Earth orbit will become impassable once the risk of collision grows too high. Today, when most space debris is cataloged, this is not yet a pressing problem. All the world's space powers scan outer space for debris, of which there is plenty in low orbits: defunct satellites, upper stages and spacecraft fragments. The space debris problem is very hard to solve quickly because of financial and political obstacles. Old satellites that have served their time should either be deorbited into the Earth's atmosphere for disposal at the "spacecraft cemetery" in the Pacific Ocean, or moved to a "graveyard orbit" if the craft is far from Earth.

Scientists have asked: why not develop a spacecraft that disposes of space debris directly in space? A prototype of such a device already exists. The idea of a debris-disposal spacecraft rests on processing space debris into fuel.

Not littering is not an option

Space debris comprises non-functioning artificial objects in near-Earth orbit that no longer serve a useful purpose. These include defunct spacecraft and launch-vehicle stages, their fragments, flecks of paint, solidified liquids ejected from spacecraft, and unburned particles from solid-fuel rocket motors. NASA has cataloged 20,000 artificial objects in orbit above the Earth, including 2,218 active satellites. As of January 2019, there were 128,000,000 pieces of debris smaller than 1 cm in orbit, about 900,000 pieces between 1 and 10 cm in size, and about 34,000 pieces larger than 10 cm. Meteoroids in Earth orbit should also be counted alongside artificial debris, since they raise the collision risk in the same way. All this endangers spacecraft: even the smallest objects cause damage, especially to solar panels, telescope optics and star trackers, which cannot easily be protected by a ballistic shield.

Over the years, Earth orbit has become more and more littered. According to the European Space Agency (ESA), humanity has launched 12,170 satellites since the beginning of the space age in 1957; around 7,630 of them remain in orbit today, but only about 4,700 are still operational. This means that almost 3,000 non-functional spacecraft are flying around the Earth at great speed, along with other large and dangerous debris. For example, the orbital speed at an altitude of 400 kilometers (where the ISS operates) is about 27,500 km/h. At such speeds, even tiny fragments of debris can seriously damage a spacecraft. By ESA estimates, near-Earth orbit holds at least 36,500 pieces of debris larger than 10 cm, 1 million objects from 1 to 10 cm across, and more than 300 million objects from 1 mm to 1 cm.
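The quoted orbital speed is easy to cross-check with the standard circular-orbit formula v = sqrt(mu / r), using textbook values for Earth's gravitational parameter and radius:

```python
import math

# Circular orbital speed at ISS altitude, from v = sqrt(mu / r).
MU_EARTH = 3.986e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6371e3         # mean Earth radius, m
ALTITUDE = 400e3         # ISS altitude, m

v = math.sqrt(MU_EARTH / (R_EARTH + ALTITUDE))   # ~7.7 km/s
print(round(v * 3.6), "km/h")                    # close to the ~27,500 km/h figure
```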



The "cascade effect" (Kessler syndrome), which in the long term may arise from collisions between objects and particles of space debris, is arguably already making itself felt, although a cataclysm on the scale of "Gravity" is still far off. Evidence of this is satellites colliding with one another. The most famous such incident occurred in February 2009, when the defunct Russian satellite Kosmos-2251 crashed into the operational communications satellite Iridium 33, producing over 2,000 fragments.


Under the current clogging of low Earth orbits, while measures to curb man-made space clutter remain only theoretical, the cascade effect could lead to a catastrophic increase in the amount of debris in low orbit and, as a consequence, make further space exploration practically impossible.


General cleaning


As the problem escalates, organizations around the world are seeking solutions, from magnets to "space claws" and harpoons. There are different ways to counter space debris: crushing large debris, removing debris from orbit (or moving a spacecraft out of the debris' orbit), knocking debris down with a laser, or processing it into fuel. No single method works for all types of debris: you cannot catch small debris with a net, and it is useless to try stopping large debris with gas.

Basically, there are two approaches to combating space debris:

* crushing space debris directly in orbit;

* decelerating large debris and removing it from low orbits to burn up in the atmosphere, or moving debris from geostationary orbit to a graveyard orbit.

Moreover, both methods have drawbacks: they produce fragments of a smaller fraction, let unburned debris fall to the ground, and clog higher orbits.


The easiest way to clean up outer space is to suspend space activity for a decade and let Earth's gravity do its job, but then humanity would stop developing. If nothing is done, then at the current pace of space activity we will soon simply be unable to launch spacecraft through the debris in orbit, and development will stop anyway.



The American company Cislunar Industries is developing a space "foundry" for melting debris into homogeneous metal rods. The propulsion system from Neumann Space can use these rods as fuel: the system ionizes the metal, which then creates thrust for moving in orbit. It is like building a gas station in space. The SCM processes debris into fuel, which lets the spacecraft gradually climb to higher orbits, up to the graveyard orbit (over 40 thousand km), clearing outer space as it goes.


Most space propulsion systems use gas as fuel. Even in liquid form, fuel takes up a lot of space, which is poorly suited to space travel. And if something goes wrong, as happened with the Challenger mission, the results can be disastrous. It is better if the propulsion system runs on solid fuel, which is much safer than explosive liquid or gas.


Electricity is applied to a metal such as titanium or magnesium, or to any solid conductive fuel rod, producing a plasma; the charged gas is expelled through the rear of the engine, creating thrust.


Simplified scheme of the Neumann engine



Paddy Neuman himself


The project's author, Dr. Paddy Neumann, took part as a student in a plasma diagnostics project: measuring how hot the plasma is, how dense it is, how fast it moves, and so on. Analyzing his results, he determined an average effective plasma velocity of 23 km/s, and realized you could make a rocket out of it.


One efficiency metric that engineers in this field like to talk about is specific impulse. Specific impulse is, in essence, the amount of thrust that can be obtained from a given weight of fuel; a higher specific impulse means the fuel is used more efficiently. This is just one factor in designing a space engine. In addition, since putting anything into orbit is very expensive, fuel that does the job with less mass or volume is very convenient. Specific impulse is measured in seconds. When Dr. Neumann began testing his engine, existing ion engines produced a specific impulse of 3,500 seconds. NASA's experimental HiPEP system could do somewhat better, at 10,000 seconds. After testing several different fuels, Dr. Neumann published his results: magnesium performed best, with a specific impulse of 11,000 seconds, roughly three times better than what is used today.
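The seconds figure maps to a physical speed through the standard definition v_e = Isp * g0, where v_e is the effective exhaust velocity and g0 is standard gravity. A small sketch, using the Isp figures quoted above:

```python
# Specific impulse and effective exhaust velocity: v_e = Isp * g0 (standard definition).
G0 = 9.80665    # standard gravity, m/s^2

def exhaust_velocity(isp_seconds):
    return isp_seconds * G0     # m/s

# Isp figures quoted in the text (approximate):
for name, isp in [("typical ion engine", 3500),
                  ("NASA HiPEP", 10_000),
                  ("Neumann drive, magnesium fuel", 11_000)]:
    print(f"{name}: {exhaust_velocity(isp) / 1000:.0f} km/s")
```

This also shows why the seconds unit is convenient: it stays the same whether you work in metric or imperial, while the exhaust velocity scales linearly with it.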

Although the Neumann engine cannot compete with chemical rocket engines for launching a ship into space, it can be installed on smaller craft or satellites to keep them in orbit. The Moon and the Sun constantly tug satellites slightly off course, so a small engine is needed to hold them in the correct orbit.

Last year, Neumann Space received $2 million in seed funding from government grants. The company says it plans to test the Neumann engine in space in the near future.


Now a kind of ecosystem is emerging in near-Earth outer space. In this ecosystem, as in any other, there are "creatures" that "live", "feed", perform their functions and, "dying", give food to other creatures. And the creatures that "feed on carrion" can and should become space debris collectors in the broadest sense of the word.


The Taiwan problem: why has the world's microchip workshop become a headache?

The year 2020 hit humanity with more than the Covid-19 epidemic. The global economy, along with consumers of electronic devices, was shaken by the microchip crisis. Chips became desperately scarce, and prices soared. Devices that were once freely available (just pay!) suddenly became a shortage item that has to be chased down and "gotten hold of", just like in the old Soviet times.


The reasons are, on the whole, quite simple. The need for microchips and processors had been growing for years, but the pace was roughly predictable, and production could be ramped up smoothly. The coronavirus pandemic made the growth in demand for electronic devices for remote work, entertainment and communication explosive and unprecedented.


Production was expanded in every possible way, especially since there was now much more money to be made, but a problem arose. Microchip fabs are the most expensive, complex and demanding production facilities, in both personnel and equipment, in the entire global economy. You cannot throw one up quickly just anywhere, like a toy assembly shop or a sewing workshop, or even an oil refinery. Building, launching and ramping a new chip fab to normal capacity from scratch takes at least three years and billions of US dollars, even under the most favorable conditions. So a gap opened between supply and demand for microchips that proved impossible to close quickly and simply. By the beginning of 2021, the chip shortage on the world market was about 30%.


The situation has been aggravated by politics, and this factor may prove so important in the coming years that the current chip crisis, driven by production and consumption, will look like a minor misunderstanding against the background of a full-fledged global catastrophe.


In 2020, the United States and China finally and unambiguously entered a political clinch, escalating into Cold War 2.0. The economic and trade war between the two most powerful economies is already in full swing, and it hits the chip market too. Back in the summer of 2020, the United States barred China's Huawei from outsourcing chip production to Taiwan's TSMC, the absolute global leader in the field. Other sanctions measures followed.


TSMC was caught between two fires, and the company's management chose the US side. The problem for the entire global economy, and for global politics, is that most of TSMC's production facilities are located in Taiwan, an island off the coast of mainland China. More than 50% of the world's chip output is now produced there, and for the most advanced chips the figure approaches 90%.


TSMC chips, made mainly in Taiwan, are present in almost everything: smartphones, high-performance computing platforms, PCs, tablets, servers, base stations and game consoles, IoT devices, digital consumer electronics, cars and nearly every weapons system built in the twenty-first century. For decades this suited everyone, but before our eyes the situation has changed dramatically. New factories in Taiwan, despite the acute shortage and the island's ideal infrastructure for them, are not even being planned.


Officially, even from the point of view of the United States, Taiwan is part of the PRC. In practice, however, the island retains its own political system, the "Republic of China", which does not answer to Beijing and traces its origins to the anti-communist forces that evacuated to the island in 1949. For two decades now, the mood of its residents has been drifting away from the idea of reunification with the mainland, an idea still popular in the 1990s, especially after the "tightening of the screws" in China and the crushing of the protests in Hong Kong. Residents of Taiwan now prefer the traditional patronage of Washington and cooperation with Tokyo.


Beijing, against the background of its confrontation with the United States and with neighbors from India to Japan, increasingly wants to subdue this "diamond" of global microelectronics that keeps eluding "reunification". It is spurred on by the realization that the factors behind the Chinese economic miracle are already in the past, growth is slowing, citizens demand freedoms instead of tightened screws, and plans for dominance in the twenty-first century may turn out to be as impossible a dream as the world communist revolution was for the USSR. More and more competitors stand in the way of unrestrained economic expansion, along with political opposition from the United States and other countries.


Therefore, the PRC is now building up its military power, especially its navy, and hints unambiguously at the possibility of a forceful solution to the question of the "rebellious island" through a rapid amphibious operation. Taiwan is demonstrating, no less clearly, its readiness to repel an invasion. It is supported in this by the United States, Japan and other opponents of Beijing's ambitions of dominance in the "southern seas" and the global economy.


Not only a war, but even a serious escalation around Taiwan could put at risk the island's supply of semiconductors critical to the modern world, threatening a worldwide economic crisis of enormous scale. The United States has already deployed a contingent of military advisers on the island, a fact reported loudly and publicly. Washington and its allies are clearly determined to prevent a situation in which the flagship of global chip production falls into the hands of China and President Xi. It is quite possible to imagine that, in the event of war and invasion, TSMC's plants would be destroyed quite deliberately and purposefully.

The second blow to the island has come from a changing climate.

Taiwan is considered an extremely humid place, flooded by monsoon rains and typhoons. Yet in 2020-21 Taiwan experienced its worst drought in more than half a century. The 2020 monsoon simply did not come. By the spring of 2021, rivers on the island were drying up and reservoir reserves were running out. And plants producing modern semiconductors for microchips require huge volumes of ultrapure water. It used to be plentiful; no longer.

It got to the point that by the summer of 2021 the authorities had to take radical steps: directing the remaining water supplies to the semiconductor fabs and cutting off supply to citizens and even farmers, even though the island's agriculture was on the brink of disaster. Only these draconian measures prevented a decline in semiconductor output, the lack of which was already suffocating world microelectronics.

The monsoon of 2021 proved more generous and eased the situation somewhat, but no one will give reliable forecasts for the coming years. The climate is becoming highly unpredictable, with a general trend toward greater aridity. And on a small island there are simply no powerful rivers that could guarantee an uninterrupted water supply even in dry periods.

Therefore, TSMC has already announced the construction of a new plant in Arizona for the latest smallest-node chips, in addition to its existing one in Washington state. The Americans themselves intend to roll out a program of subsidies for the semiconductor industry on their own territory, but so far the sums look conservative and modest relative to the scale of the problem: only a few hundred billion US dollars. And production facilities in the USA still mostly lag behind the Taiwanese ones in the quality and node size of their chips.


The Europeans are not standing aside from the microchip problem either. In the summer of 2021, EU countries announced a goal of doubling their share of the global microchip market by 2030. Intel plans to build a new $20 billion semiconductor plant in Europe and is considering Germany, the Netherlands, France and Belgium as potential production sites. The EU's problem is that its capacities are badly outdated. The best fab in Europe is Intel's plant in Ireland, which produces 14 nm chips, whereas in Taiwan they are already making 3 nm, and Samsung and IBM have just announced a technological breakthrough past the 1 nm threshold.

The PRC's plans are by no means limited to bringing Taiwan into subordination. In addition to the two TSMC plants in the country, which face growing problems, in 2021 Beijing decided to allocate one and a half trillion US dollars to deploy its own production and substitute imported technologies through the efforts of Huawei, Alibaba and SenseTime, replacing increasingly inaccessible solutions from IBM, Oracle and EMC. There is a campaign to lure away TSMC engineers, who are promised two- or threefold salary increases on the mainland; the Taiwanese company itself has responded with sharp raises for its employees, and the government of the Republic of China has banned the publication of PRC job vacancies.

South Korea has also decided to make a breakthrough in chip production; previously Samsung and the other chaebols preferred to buy from the same TSMC. Now Seoul wants to deploy a full production cycle in the country within the coming years. However, South Korea also lies near the PRC, and from the north it is threatened by the nuclear missiles of its Beijing-friendly cousins.

The government of India, which has its own powerful IT sector, has also decided to compete with its Himalayan neighbor, with whom New Delhi's relations keep worsening and border clashes keep growing more frequent. Just this week, India announced the allocation of $11 billion to attract chip manufacturers to the country.

However, many economists question the wisdom of distributing microchip production outside Taiwan. In their view, if, as still seems more likely, a military crisis around the island does not happen, the enormous efforts and resources of many countries will have been thrown to the wind, and by the time the chip market normalizes there will already be excess supply.


Time will tell who is right in this dispute. Still, the idea of not putting all the eggs in one basket, especially a basket sitting in a fire-hazardous building, seems the wiser one.

Noctua, in partnership with Drop, releases keycaps in its classic color palette

 

Noctua has collaborated with Drop to release a keycap set for Cherry MX switches. The keycaps are produced in Noctua's signature color scheme, and a variety of kits are available for different keyboards and layout form factors. Pre-orders have already started, with prices starting at $115. The base kit includes all the keycaps required for a standard US-layout keyboard, plus several extra keys for customization. Each kit also comes with a keycap bearing the Noctua logo and a fan image.


In addition to the base kit, you can order numpad kits and kits for Colevrak, Ortho, ISO UK and other layouts. There is also a set of separate spacebars in different keycap sizes. Together, the kits let you fit the keycaps to a wide range of custom keyboard configurations.

The keycaps are made of doubleshot ABS plastic, which is fade-resistant and durable. The keys also have a sculpted, arched profile that makes contact with the fingers more comfortable. Notably, the color scheme is the same proprietary palette used on Noctua's fans.

Oxford researchers find a link between video games and mental well-being

 Video games can be beneficial for mental health, recent research from the University of Oxford suggests. The scientists found that people who play video games for long periods tend, in most cases, to report being happier than those who do not.


The study used data that the Oxford researchers obtained from Nintendo and Electronic Arts. Nintendo provided information about the time players spent in Animal Crossing: New Horizons; EA, in addition to playtime, also shared players' achievements and behavior in Plants vs Zombies: Battle for Neighborville.

This data was combined with the results of a survey in which the players rated their mood and well-being. The survey involved 3,274 people, all over 18 years old.

Professor Andrew Przybylski, who led the study, said he was surprised by the results.

“If you play Animal Crossing for four hours a day, you probably feel significantly happier than someone who doesn't,” he said. "These numbers are at odds with past research, which showed that the longer people play, the more unhappy they feel."

The researcher suggested that one reason for the discrepancy may be the social features of Animal Crossing and Plants vs Zombies, in which players interact with other people.

“I don’t think people will spend a lot of time playing with the social aspect if they don’t like it,” said Przybylski.

That said, people who felt “compelled” to play, for example to escape stress in other areas of their lives, reported feeling less satisfied.

What sets this research apart is its use of real data on time spent in games. The Oxford team was able to combine the survey results with accurate figures supplied by the publishers. In previous studies, players mostly estimated for themselves how much they played, and the scientists believe such self-reports may be inaccurate.

The researchers are careful to emphasize that the findings are not carte blanche for gaming.

“I’m sure that as the evidence accumulates, we will learn about the toxic aspects of video games and find plenty of evidence of their existence.”

For example, the Oxford team points out that the study covered only two family-friendly games, and other games may prove less beneficial. A player's attitude toward a game also affects the mental impact it has. The scientists note that "intrinsic" motivation appears to be the key factor: whether a person plays a game simply because it is fun.

“Our results show that video games are not necessarily harmful to your health. There are other psychological factors that significantly affect a person's well-being. In fact, play can be an activity that positively affects people's mental well-being, and regulating video games could deprive players of these benefits,” the researchers concluded.