After more than a decade on top, Intel has squandered its advantage over its main competitor, AMD. Jeremy Laird tries to work out where Intel went wrong and how it plans to fight back.

So what exactly happened to Intel? The one-time undisputed leader in processor and chip manufacturing now trails its competitors by almost every measure that matters. AMD's CPUs have turned out to be better designed, and TSMC's manufacturing technology is more advanced. It looks as though Intel has completely lost its way.


Even in the mobile PC market, where the company has been the undisputed leader for decades, Intel's processors have been outclassed by AMD's new Renoir chips.

Things are so bad that Apple has announced plans to end its relationship with the company and switch to its own ARM-based chips. Worse, rumour has it that Intel itself is planning a partnership with TSMC to manufacture some of its products, including its first discrete graphics card. That would be a real humiliation for the company.

Or is it all just talk? Despite all its difficulties, Intel earned an unprecedented amount of money last year - $72 billion. Arguably the company's biggest problem is that it cannot keep up with demand from the so-called hyperscale data centres - companies such as Amazon, Microsoft, Google and Facebook - which simply cannot get enough Xeon processors. And there are good reasons to believe that Intel will soon return to form in both chip manufacturing and CPU microarchitecture.

How do you sum up Intel's troubles in a nutshell? "10 nanometres," I would say. It is not only about the failure of the process technology itself - accusations of resting on its laurels could just as easily be levelled at the company's microarchitecture development. But 10 nanometres! That is the real catastrophe.

"10 nanometres" here refers to the process technology, or node, used to manufacture computer chips. In theory, 10 nm is the size of the smallest features inside the chip. In practice, however, process node names and the actual dimensions of components - the gates of the transistors inside a desktop processor, for example - diverged long ago. Most likely there is no feature inside an Intel 10nm processor that actually measures exactly 10 nm.

This lack of a strict link between feature size and node name becomes even more of a problem when it comes to comparing the process technologies of rival manufacturers. But more on that later. For now we are interested in Intel's 10nm process and its shortcomings. It was originally expected to enter production back in 2015. It is now 2020, yet the range of 10nm chips remains small. You cannot buy a desktop PC or server CPU built on the process. Only mobile processors for laptops and tablets have moved to 10nm, and only those in the low-power and ultra-low-power classes. Everything else remains on 14 nm.

These facts need to be judged against the yardstick Intel itself set - Moore's Law - and against the physical limits that chipmakers have run into in recent years. Even bigger troubles in semiconductor manufacturing lie ahead once individual transistors shrink to the size of a handful of atoms and start to be governed by awkward quantum effects such as tunnelling. But that is another story.

Most likely Intel's problems come down to a combination of excessive ambition, an ageing production technology, and perhaps complacency and under-investment.

Intel's CEO Bob Swan has said that the company's difficulties with 10nm technology are "something of a consequence of what we did in the past. Back then we tried to win at any cost. And when things got really hard, we set ourselves even more ambitious goals. That is why it took us so long to reach them."

Inflated expectations for chips

For the 10nm node, that ambition meant increasing transistor density by a factor of 2.7. In other words, per unit of die area a 10nm node packs 2.7 times as many transistors as a 14nm node. More specifically, processors built on the 14nm process hold 37.5 million transistors per square millimetre, while each square millimetre of a 10nm die holds around 100 million. That jump in transistor density makes the 10nm transition considerably more ambitious than previous node changes.

The 2.5x density increase that came with the move from 22nm to 14nm was impressive, but the move from 32nm to 22nm delivered a 2.1x increase, and the transition from 45nm to 32nm a 2.3x increase. Understanding the scale of these changes also helps to make sense of the differences between Intel's nodes and those of rival vendors. Intel's 10nm design, for example, targets a density of 100.8 million transistors per square millimetre. That figure is slightly higher than TSMC's 7nm at 96.5 million transistors (TSMC later announced 113.9 million transistors per square millimetre for an improved 7nm process). Samsung's 7nm variants all fall short of the 100 million mark.
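
As a rough sanity check of those figures, a few lines of Python reproduce the scaling factors from the densities quoted above (the numbers are simply the ones cited in this article, not measurements of real dies):

```python
# Published transistor densities, in millions of transistors per mm^2
# (figures as quoted in the article; real dies vary by design).
densities = {
    "Intel 14nm": 37.5,
    "Intel 10nm": 100.8,
    "TSMC 7nm": 96.5,
    "TSMC 7nm improved": 113.9,
}

intel_14nm = densities["Intel 14nm"]
for node, density in densities.items():
    print(f"{node}: {density} MTr/mm^2 -> {density / intel_14nm:.2f}x Intel 14nm")
```

The 100.8 / 37.5 ratio works out at roughly 2.7, which is exactly the "2.7 times" scaling Intel promised for the node.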

The point is that Intel's 10nm development was extraordinarily ambitious - so much so that in 2017 the company coined the term "hyper scaling" to describe the density increase. In hindsight it is fair to say the expectations were too high, because Intel designed the node around deep ultraviolet (DUV) lithography. In a nutshell, the size of the features in a chip is determined by the wavelength of the light used in the lithographic process, which etches those features onto the surface of a silicon wafer; computer processors are then cut from the wafers.

It is not a strictly linear relationship: various techniques and supporting tricks, such as multiple patterning with masks acting as a kind of multiplier, can shrink feature sizes well below the actual wavelength of the light.

DUV chipmaking equipment uses ultraviolet light with a wavelength of 193 nm. But there is a limit to how densely transistors can be packed at that wavelength, and Intel overshot it.

The result has been a painful production delay of almost five years - an eternity by the standards of Intel's usual cadence and Moore's Law. There are already signs that the 10nm process is not all it was supposed to be. Ice Lake, the new 10th-generation mobile processors, clock lower than their 14nm predecessors. The fastest 10nm Ice Lake part, the Core i7-1065G7, tops out at around 3.9GHz, while the eighth-generation Core i7-8665U boosts a solid 900MHz higher. That is a big gap, and it suggests something has gone wrong with the process.

Further evidence that the 10nm process has not lived up to Intel's expectations is the duplication within the low-power 10th-generation line-up. Alongside today's Ice Lake CPUs sits the newly minted Comet Lake family, and both are branded as 10th generation.

Like Ice Lake, Comet Lake mobile processors come in low-power and ultra-low-power variants.

But unlike Ice Lake, Comet Lake uses the 14nm rather than the 10nm process, and stretches to six-core models with clock speeds as high as 4.9 GHz.

As a result, you can buy a thin-and-light laptop today with a processor carrying Intel's 10th-generation badge, yet what is inside the box can vary considerably. If the chip has two or four cores, it may be low-power or ultra-low-power, and either 10nm or 14nm. It may be built on the Skylake microarchitecture of 2015 or on the brand-new Sunny Cove design that underpins Ice Lake.

Microarchitecture difficulties

The mention of Sunny Cove brings us neatly to Intel's other great failure - microarchitecture. Before the 10nm Ice Lake chips for ultraportable laptops arrived last year, the vast majority of Intel's desktop, laptop and server processors relied on the 14nm process, which debuted in 2014, and the Skylake architecture, which appeared in 2015. Both were revised many times, but none of those updates brought substantial changes.

Worse still, from the debut of the Nehalem microarchitecture in 2008, Intel offered no more than four cores in its mainstream PC processors. That remained true until the 2017 release of Coffee Lake, an evolved version of Skylake, which finally pushed the count to six cores. For roughly a decade, Intel did not increase the core count of its mainstream products.

In just a couple of years since then, Intel has raised the bar to 10 cores for mainstream desktop processors with the release of Comet Lake, yet another updated Skylake derivative in the 14nm family. So for a decade there was no movement at all, and then a 2.5x increase in a very short span. What could prompt such a dramatic jump in core counts after such a long stagnation? The answer is the arrival of AMD's Zen architecture and Ryzen processors, the first of which launched in 2017. To put it bluntly, Intel was content to coast for as long as it had no real competitor.
Of course, even with ten cores Intel still trails AMD, which currently offers 16 cores in its mainstream desktop line-up with 3rd-generation Ryzen processors. Part of their advantage is that they are built on TSMC's 7nm process.

In the mobile sector things are no better for Intel. AMD's new 7nm Renoir line of hybrid processors offers eight Zen 2 cores within a 15-watt envelope. The best Intel has managed in response is the six-core Comet Lake Core i7-10810U, a processor with a base clock of just 1.1 GHz. The 15-watt Ryzen 7 4800U packs eight cores at a 1.8 GHz base clock. It is not a flattering comparison.

Looking to the future

So that is the charge sheet. Recent years have not been technologically productive for Intel. George Davis, the company's chief financial officer, described the 10nm stumble this way: "This node will certainly not be the best in Intel's history. It is less productive than the 14nm process, and less productive than 22nm." But are the consequences of Intel's ongoing problems really so disastrous?

From a financial standpoint, the answer is clearly no. At the very least the current situation is not that bad, and in some respects there is no problem at all. In 2019 Intel's revenue hit record numbers. Since mid-2018 its sales have not suffered from the technological stagnation; if anything, the company has struggled to keep up with demand for its 14nm processors.

Dig a little deeper, though, and you could conclude that at least part of the supply problem does stem from the process technology. The core count of Intel's server processors has risen sharply during the 14nm era; Intel now offers as many as 28 cores on a single die. The more cores on the same process, the bigger the die, and the fewer processors can be cut from a single wafer - which in turn can lead to supply constraints.
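
To see why bigger dies squeeze supply, here is a back-of-the-envelope estimate using a common dies-per-wafer approximation; the two die areas are invented purely for illustration:

```python
import math

WAFER_DIAMETER = 300.0  # mm, the standard wafer size

def dies_per_wafer(die_area_mm2: float) -> int:
    """Rough gross die count on a round wafer (common approximation, yield ignored)."""
    radius = WAFER_DIAMETER / 2
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * WAFER_DIAMETER / math.sqrt(2 * die_area_mm2))

# Hypothetical die sizes: a small quad-core die versus a large many-core server die.
for area in (150.0, 700.0):
    print(f"{area:.0f} mm^2 die -> roughly {dies_per_wafer(area)} dies per wafer")
```

Under this approximation a 700 mm^2 die yields only around 75 gross dies per wafer against roughly 415 for a 150 mm^2 die, before yield losses are even counted - which is exactly how fatter server parts eat into capacity.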

Whichever way you look at it, though, Intel is not in any financial difficulty - and that, arguably, is the main reason it is able to mount a serious fightback against its competitors in product design and technology.

And the effects are already visible. Ice Lake processors introduced a brand-new microarchitecture known as Sunny Cove, which improves per-clock performance by around 18% compared with Coffee Lake, itself a refinement of Skylake.

But even that is only the beginning. A decisive moment in the renaissance of Intel's microarchitecture was the hiring of Jim Keller, who led the company's microprocessor development group.
Although he is now leaving that post - staying on as a consultant for six months - his contribution to the company's development should not be underestimated. Keller is one of the most respected, if not the most respected, microprocessor designers in the business.

He made his name developing the K8 microarchitecture behind the Athlon 64, the AMD chip that became a serious rival to Intel. Keller later worked at Apple, designing the first of its in-house ARM-based processors, which went on to lead the phone and tablet market. In 2012 he returned to AMD to lead development of the Zen microarchitecture, once again arming AMD for the fight against Intel. After a brief stint in a senior role at electric car maker Tesla, Keller joined Intel as a senior vice president in early April 2018.
Given the lag between processor and microarchitecture design and the launch of the resulting products, it is highly unlikely that the new Sunny Cove cores inside Ice Lake are Keller's work. The same will almost certainly be true of Willow Cove, the architecture that follows Sunny Cove. It is expected at the end of this year, including in Rocket Lake, a family of 14nm "backported" processors that carry the new microarchitecture back to the older process.

The Golden Cove microarchitecture will take an even bigger step forward and lay the groundwork for the upcoming Alder Lake processors. But even Golden Cove cannot really be considered a Keller design. For that we will have to wait for Ocean Cove, due in 2022 or 2023, although Keller's early departure means his influence even on that design will probably be somewhat limited.

There is, sadly, little detail about Ocean Cove so far. Not long ago there were rumours that the microarchitecture would deliver 80% more performance than Skylake. Even if that is pure speculation, we know for certain that Keller has a formidable track record and that Intel has far more ambitious plans than it has had in years. As Keller has put it: "We expect to increase the number of transistors roughly 50-fold and do our best to squeeze the maximum out of every one of them."

Meanwhile, the 7nm CPUs that follow the troubled 10nm generation should not face the same limitations as their predecessors. The 7nm process will use extreme ultraviolet (EUV) lithography with a wavelength of just 13.5 nm. Whether that transforms the 7nm process completely remains to be seen, but it is already clear that Intel's management is extremely optimistic.
Intel plans to push on from 7nm to 5nm and beyond. From this we can conclude that the company intends to improve the new technology aggressively, in contrast to the situation today, even if that requires heavy investment in research and development. Above all, with the adoption of EUV lithography Intel expects to return to its old cadence of a new node every two years, starting from the 7nm process in 2021 and culminating in a 1.4nm node in 2029. "I think EUV will help us get back to the pace of transistor growth that Moore's Law describes," Davis said.

Taken together, all of this gives the impression that Intel intends to reclaim the crown for the most advanced architectures and the fastest processors. Whether it will succeed is another question. Right now AMD is arguably in the better position, however hard Intel pushes. AMD's strong microarchitecture roadmap, including Zen 3 and Zen 4, combined with TSMC's process technology, will only intensify the competition between the two manufacturers. Still, we would not bet on Intel's defeat.
After all, the last time Intel found itself in a corner - when NetBurst and the Pentium 4 reached the end of the road - the company answered with the Core family and a decade of dominance in the processor market.

A brief history of the FTP protocol

It is one of the oldest protocols still underpinning the modern internet (it turns 50 next year), yet well-known applications now want to leave it in the past. This is the story of FTP, a network protocol that has lasted longer than most.




Back in 1971, Abhay Bhushan, an Indian-born MIT graduate student, first came up with the File Transfer Protocol. FTP, which took shape two years after telnet, became one of the first examples of a working application-level protocol for the system that would come to be known as ARPANET. It predates e-mail, Usenet and the TCP/IP stack. Like telnet, FTP is still used today, albeit to a limited extent. On the modern web, however, it has lost much of its relevance, largely because of security problems, and its place has been taken by encrypted protocols - in FTP's case by SFTP, a file transfer protocol that runs on top of Secure Shell (SSH), which has itself largely replaced telnet.


FTP is so old that it predates e-mail, and for a while it essentially played the role of an e-mail client. Perhaps unsurprisingly, of all the application-level programs designed for the early ARPANET, it is FTP that survived and made its way into the world of modern technology.

The main reason for that is its basic functionality. At heart it is a utility that makes it easy to transfer data between hosts, but the secret of its success lies in the fact that, at a certain point, it smoothed over the differences between those hosts. As Bhushan notes in his Request for Comments (RFC), the most frustrating thing about using telnet at the time was that every host was slightly different from every other.

"Differences in terminal characteristics are envisioned by the host system programs in consonance with standard protocols," he scribbles, mentioning both telnet and the remote instruction entry document of that era. "However, in order to dispose of them, you therefore need all sorts of conventions of remote systems."


A teletype terminal of the ARPANET era.

The FTP spec he wrote tried to sidestep the difficulty of connecting directly to a server by supporting what he called "indirect use": a mode that let users transfer data or run programs remotely. The first version of Bhushan's protocol, still in use decades later albeit in modified form, used a directory structure to paper over the differences between individual systems.

In his own RFC, Bhushan writes:

I have attempted to present a user-level protocol that will permit users and programs to make indirect use of remote host machines. The protocol facilitates not only file system operations but also program execution on remote hosts. This is achieved by defining requests which are handled by cooperating processes. The transaction sequence orientation provides greater assurance and would facilitate error control. The notion of data types is introduced to facilitate the interpretation, reconfiguration and storage of simple and limited forms of data at individual hosts. The protocol is readily extendible.


In an interview with the podcast Mapping the Journey, Bhushan explained that he started developing the protocol because of the obvious need for applications on the nascent ARPANET, including e-mail and FTP. Those early applications became the fundamental building blocks of the modern internet and have been improved endlessly over the decades.

Bhushan said that because the computers of the time were so limited, the original functions of e-mail were partly handled by FTP, which made it possible to send messages and files over the protocol in a lightweight form. For about four years, FTP effectively was e-mail.

"We asked: why not add two commands to FTP, mail and mail file?" he says in the interview. "The mail command would be used for plain text messages, and mail file for attachments - something we still have today."

Of course, Bhushan was not the only one who worked on this important early protocol: after graduating, he took a job at Xerox. The protocol he created continued to evolve without him, receiving a series of RFC updates in the 1970s and 1980s, including a revision in 1980 that added support for the TCP/IP specification.

Minor updates kept the protocol abreast of newer technologies, but the version we use today essentially arrived in 1985, when Jon Postel and Joyce K. Reynolds wrote RFC 959, a revision of the earlier specifications that underpins modern FTP software. (Postel and Reynolds were also working on the Domain Name System (DNS) around the same time.) Although the document describes itself as merely "intended to correct some minor documentation errors, to improve the explanation of some protocol features, and to add some new optional commands," it became the definitive version.

Given its age, FTP has plenty of weak spots that are still felt today. Transferring a folder containing many tiny files, for example, is very inefficient over FTP; the protocol works much better with large files, since the number of simultaneous separate connections it can use is limited.
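
A minimal sketch of what talking to an FTP server looks like from code, using Python's standard ftplib module (the host, credentials and directory here are hypothetical), also shows where the small-file overhead comes from:

```python
from ftplib import FTP

# Hypothetical server and credentials, for illustration only.
HOST, USER, PASSWORD = "ftp.example.com", "anonymous", "guest@example.com"

with FTP(HOST) as ftp:
    ftp.login(USER, PASSWORD)      # control connection: USER / PASS commands
    ftp.cwd("/pub")                # change the remote working directory
    for name in ftp.nlst():        # list entries (assumed to be plain files here)
        # Every RETR opens its own data connection, which is why fetching
        # many tiny files is so much slower than pulling one large file.
        with open(name, "wb") as local_file:
            ftp.retrbinary(f"RETR {name}", local_file.write)
```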

Because it appeared so early in the history of the internet, FTP has in one way or another shaped most of the protocols that followed. You could compare it to something that has been tweaked and improved countless times over the decades - basketball shoes, say. Yes, the Converse All-Star is a decent shoe, and in the right circumstances it will still do the job today, but you will probably do better with something more modern from Nike, perhaps under the Air Jordan brand.

The File Transfer Protocol is the Converse All-Star of the internet. It was moving files around before doing so was cool, and it retains some of its charm to this day.

"Nobody was making money from the internet back then. Quite the opposite - a lot of money was being poured into it. We were in the thick of that battle and knew it had potential, but if anyone tells you they knew what was coming next, they are lying. I did, however, see it all with my own eyes."

That is how Alan Emtage, the father of Archie, the internet's first search engine, put it to the Internet Hall of Fame - because his invention, which let users search anonymous FTP servers for files, never made him rich. The internet of that era was non-commercial, and Emtage, a graduate student and technical support staffer at McGill University in Montreal, quietly used the university's connection to run the Archie service. "But it really was the right thing to do. As the old saying goes, it is better to ask forgiveness than permission." (Like Bhushan, Emtage was an immigrant: born and raised in Barbados, he came to Canada as a student on the strength of his academic record.)

NVIDIA postpones the RTX 3070 launch to avoid another rush

NVIDIA has announced that the retail launch of its new RTX 3070 graphics cards has been postponed. They will arrive in stores on October 29 instead of October 15. The decision was taken to give the company time to build up sufficient stock.


Demand for the GeForce RTX 3080 turned out to be far higher than expected. The cards went on sale on September 17th, buyers were met with shortages, and prices for the new cards on eBay ranged from $1,000 to $2,500 instead of the $699 list price. The company ended up apologising for the ensuing chaos.

NVIDIA unveiled its new Ampere graphics cards on August 31st: the GeForce RTX 3090, RTX 3080 and RTX 3070. The RTX 3070 uses GDDR6 memory, has 5,888 CUDA cores and 8 GB of video memory. The company promises that the GeForce RTX 3070 will match or exceed the performance of the GeForce RTX 2080 Ti (which costs twice as much) and will be roughly 60% faster than the GeForce RTX 2070. The announced US price of the card is $499.

Agile without idealism. When and how flexible management works. A political economy pamphlet

I think few would dispute that the leading teams in the software development industry and in corporate IT departments use incremental-iterative approaches in their work. Over time, though, Agile has become encrusted with a heap of idealism and quackery: coaches, mentors and motivators of every stripe preach, in charts and diagrams, about what you must believe in to reach the corporate goal.


The aim of this article is to turn the idealistic attitude to Agile on its head - to explain in materialist terms when Agile works and why certain values and principles actually function; not an idealistic Agile, but a materialist one.

Agile is like a religion

Coaches and books often say that you must adopt the values and principles - change your mindset. Believe the preacher and you will be rewarded! Believe in the new god! At the same time, of course, everyone is free to interpret those principles and values in their own way. So you either have to be initiated into the new faith, or you have to understand the material reasoning behind these postulates. Instead of listening to preachers, managers and executives should think through, and explain to others, what the material basis of a product's competitive advantage actually is - because that is what becomes the object of work and of pay.

When the number of buyers of your product is very large, there is a direct benefit in delivering more utility to customers through that product. The greater that benefit, the higher its value to the consumer and the more such buyers there are.

For any Agile framework it is essential to obtain feedback continuously in order to make the product more attractive to users and to grow the audience. Agile management accepts that the company hierarchy cannot know all of the customer's needs in advance, and instead discovers them through empirical search. Anyone who takes part in the process can become a source of value, which is why involvement is welcomed. A newly devised feature or newly gained insight should accordingly be implemented as quickly as possible, precisely so that feedback on it can be collected. That, in turn, requires the production structure to be adaptable to change, and this adaptability becomes a defining property of production, of capital as a whole.

When Agile works

For XP, Scrum or any other Agile framework to work, the product being made must have certain properties. It must be a virtual good whose profit arises from the process of exchange - that is, from sales. By that I mean not the genius of the salesperson (the ability to sell something cheap at a high price), but the fact that the product is issued as copies, so that profit depends on the number of buyers and the price. By contrast, a product made for a single customer and sold once is better handled by conventional management - a waterfall.

Only in the case of a virtual product (software, games, TV series, cartoons, even branded goods) - one that will find a huge number of buyers on the market - does the method of production change, so that prices and profits are formed not by what is saved on employees but by the consumer qualities of the product that is replicated for exchange. A detailed discussion of the differences between an ordinary product and a virtual one, and of the management suited to each, was the subject of my previous article.

A consequence of this new method of production is a shift in the internal contradiction of management. If producing ordinary goods for greater profit (surplus value) demands intensifying the exploitation of workers, then producing virtual products forces management to focus on maximising use value, which is what generates the profit (multiplied value).

The production of a virtual product has another important feature. In commodity production, decisions about what is produced rest with the capitalist or the state - that is, with the agent who organises the cooperation of employees on behalf of capital. When a virtual product is produced, production decisions pass in part into the hands of the workers, because it is they who can add a feature to the final product that will increase its use value. In the incremental-iterative form of cooperation, what matters is not the outlay on advanced variable capital, but the consumer qualities of the product and the number of consumers.

The next major difference between managing an ordinary product and managing a virtual one is...

Back in the USA: HPE starts assembling servers in America

Hewlett Packard Enterprise (HPE) will be the first major manufacturer to return to domestic assembly. The company has announced a new programme for building servers from components sourced in the United States. Through the HPE Trusted Supply Chain initiative, HPE will take responsibility for supply chain security for North American buyers. The offering is aimed primarily at customers in government, healthcare and financial services.


HPE explains that, contrary to popular belief, security does not begin the moment the equipment is switched on and put into operation; it is laid down at the assembly stage. That is why it is so important to track the supply chain, labelling and every other process. Parts obtained from dubious sources can contain hardware and software backdoors.
Through the HPE Trusted Supply Chain initiative, government bodies and the public sector will be able to buy certified North American-built servers.

The HPE ProLiant DL380T server will be the first product to meet all of these security requirements. Only some of its components are made in the USA, so the equipment qualifies for the "Country of Origin USA" category rather than simple US assembly with a "Made in USA" label.

Features of the new HPE ProLiant DL380T server:

Enhanced security mode. The feature is enabled at the factory and raises the level of protection against cyber attacks. The mode requires user authentication before the server boots.
Protection against a tampered OS. UEFI Secure Boot ensures the server runs only the factory pre-installed operating system.
Server configuration lock. If the default settings have been changed, the system reports it at boot. The feature prevents unwanted interference by third parties.
Intrusion detection. This protects against physical tampering. Server owners receive a warning if someone tries to open the server chassis or remove part of it, and the feature works even when the server is powered off.
Special secure delivery. HPE will provide a dedicated vehicle and driver if the server needs to be shipped securely from the factory to the customer's data centre, ensuring the equipment cannot be tampered with in transit.


For security and flexibility of supply

The Covid-19 pandemic exposed problems in the logistics of electronic components and systems. It also disrupted the operations and business processes of many companies involved in producing and supplying electronics. HPE decided to increase the number of supply channels to avoid depending on a single region or country. Diversity and flexibility in the supply chain is now a sensible policy for manufacturers around the world. That is why HPE is building this product where it expects to sell it - in the USA.

HPE has a facility staffed by personnel with special security clearance, and it is there that the server hardware is to be manufactured. Next year a similar programme is planned for Europe, with production launching in one of the EU countries.

HPE Trusted Supply Chain is not HPE's first step toward stronger cyber security. Earlier it introduced the Silicon Root of Trust technology, which relies on an immutable digital signature to secure the remote management subsystem of its servers, iLO (Integrated Lights-Out). The server will not boot if firmware or drivers with mismatched digital signatures are detected.

It looks as though HPE will be the first in a line of large companies returning to domestic assembly. Other companies have already begun relocating capacity, moving assembly lines out of China in the wake of the trade war between the United States and the PRC.

The new macromarketing

The coronavirus pandemic is not over yet, but it has already changed the way we live. People have started keeping their distance from one another and spending far more time at home. Some companies remain closed, and it is unclear when they will reopen. A new reality has arrived, and we will have to adapt to it somehow. Those who fail to do so will not find a place in the sun. The pandemic came unexpectedly and caught people off guard, its consequences are not yet fully understood, and whoever works them out first will be able to pull ahead of the competition.


The first thing to do is understand what exactly has happened. The hardest blow fell on businesses built around gatherings of people. People were, in effect, driven into their homes. Many suffered, but the IT industry, if anything, gained from this. The need for broad and reliable communication channels has clearly grown. Above all, the IT industry has adapted to the new reality better than anyone.

Specialists used to sit in offices; now they sit at home and do the same work. But ask yourself: who benefits from what is happening? If it is not the IT industry, then that beneficiary has badly miscalculated. The pandemic may well have arisen naturally, but someone lobbied for the harsh containment measures imposed on a global scale.

It seems to me that even the IT industry does not fully understand where this may lead. People have abruptly begun changing their habits, abandoning old standards and searching for new ways of thinking. For the internet marketing industry this is a golden moment. The old templates are broken; new ones have to be invented, but you have to work out which templates will stand the test of time.

Some will no doubt keep using the old, brute-force marketing techniques, but their weakness is that they require serious investment to sustain. They will lose out to methods that rely on people's natural reactions - I mean viral advertising. For that to work, though, the buyer must be interested not only in the product but in spreading information about it. And for that he has to get something in return.

The network marketing model will not work here. It means drawing the buyer in as a seller, and nobody wants that. People should be involved purely as advertisers - but when a sale happens, the person who spread the word should earn a bonus. This can work not only for online sales but offline too. The tools for it already exist: referral links. A purchase made without a referral link should cost more, so that passing on a referral link is genuinely worthwhile. Almost nobody uses such a model yet; whoever adopts it first has a real chance to change the market. It may well push aside some huge retail chains that cannot adapt to it.
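
As a toy illustration of the pricing logic described above - every number here is hypothetical - the markup on non-referral purchases is what funds the referrer's bonus:

```python
# Toy model of referral pricing; all figures are made up for illustration.
BASE_PRICE = 100.00          # price when buying through a referral link
NO_REFERRAL_MARKUP = 0.10    # extra charged when no referral link is used
REFERRER_BONUS_RATE = 0.05   # share of the sale credited to the referrer

def checkout(has_referral: bool) -> tuple[float, float]:
    """Return (amount the buyer pays, bonus credited to the referrer)."""
    if has_referral:
        return BASE_PRICE, BASE_PRICE * REFERRER_BONUS_RATE
    return BASE_PRICE * (1 + NO_REFERRAL_MARKUP), 0.0

print(checkout(has_referral=True))   # (100.0, 5.0)
print(checkout(has_referral=False))  # (110.0, 0.0)
```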

This model does have weak spots - for instance the need for easy cash-out. People will only trust the system once they can actually buy something with their bonus, and do so immediately after it is credited. The bonuses cannot be token amounts either: people need to see that by bringing in a customer they can buy something themselves. Given that a sizeable share of advertising spend is already built into the price of a product, meeting that condition looks realistic. But to implement the idea you would have to cut spending on old-style macromarketing and invest in the new kind. Only large organisations can really afford that, yet their very size makes it harder for them to pull off. And that is another opportunity.

You do not even need a shop of your own at all - you could simply provide the service of tracking sales, calculating bonuses and handling pay-outs. Existing payment systems are best placed to do this, but there is room for newcomers too: nobody is doing it yet. An idea like this could probably attract backing from an investment fund, although, of course, it would first have to be worked out in detail.

Open-sourcing DataHub: LinkedIn's metadata search and discovery platform

Being able to find the right data quickly is essential for any company that relies on large amounts of data to make decisions. This affects not only the productivity of data users (analysts, machine learning engineers, data scientists and data engineers), it also has a direct impact on the end products that depend on a high-quality machine learning (ML) pipeline. Moreover, the trend toward adopting or building ML platforms naturally raises the question: how do you discover your features, models, metrics, datasets and so on internally?


In this post we describe how we released DataHub, our metadata search and discovery platform, under an open license, starting from the early days of the WhereHows project. LinkedIn maintains its own version of DataHub separately from the open source version. We'll start by explaining why we need two separate development environments, then discuss the early approaches to open-sourcing WhereHows and compare our internal (production) version of DataHub with the version on GitHub. We will also share details of our new automated solution for pushing and pulling open source updates to keep both repositories in sync. Finally, we will provide instructions on how to get started with the open source DataHub and briefly discuss its architecture.


WhereHows is DataHub today!

LinkedIn's metadata team previously introduced DataHub (the successor to WhereHows), LinkedIn's metadata search and discovery platform, and shared plans to open it up. Shortly after that announcement, we released an alpha version of DataHub and shared it with the community. Since then we have been continually committing contributions to the repository and working with interested users to add the most-requested features and fix problems. Today we are happy to announce the official release of DataHub on GitHub.

Approaches to open source

WhereHows, LinkedIn's original portal for finding datasets and their lineage, began as an internal project; the metadata team open-sourced it in 2016. Since then the team has always maintained two different code bases - one for open source and one for internal LinkedIn use - because not all of the product features built for LinkedIn's use cases were applicable to a wider audience out of the box. In addition, WhereHows has some internal dependencies (infrastructure, libraries and so on) whose source code is not open. In the years that followed, WhereHows went through many iterations and development cycles, which made keeping the two codebases in sync a big problem. Over the years the metadata team tried several approaches to keeping internal development and open source development aligned.

First Attempt: "Open Source First"

Initially we followed an "open source first" development model, in which most development happens in the open source repository and changes are then pulled in for internal deployment. The problem with this approach is that code is always pushed to GitHub first, before it has been fully validated internally. Until changes are pulled from the open source repository and a new internal deployment is carried out, no production issues come to light. And when a deployment went bad, it was hard to pin down the culprit, because changes were pulled in batches.

This model also reduced the team's productivity when developing new features that required rapid iteration, since it forced every change to be pushed first into the open source repository and then merged into the internal one. To cut turnaround time, a necessary fix or change could be made in the internal repository first, but that became a huge problem when the time came to merge those changes back into the open source repository, because the two repositories drifted out of sync.

This model is much easier to apply to shared platforms, libraries or infrastructure projects than to full-featured custom web applications. It also suits projects that are open source from day one, whereas WhereHows began as an entirely internal web application.

Second Attempt: "Internal First"
As a second try, we moved to an “internal first” development model, in which most of the development happens in-house and changes are made to open source on a regular basis. While this model is best suited for our use case, it has inherent problems. Directly submitting all the differences to an open source repository and then trying to resolve merge conflicts later is an option, but it is time-consuming. In most cases, developers try not to do this every time they check their code. As a result, this will be done much less frequently, in batches, and thus makes it difficult to later resolve merge conflicts.

The third time everything worked out!

The two failed attempts mentioned above have left the WhereHows GitHub repository outdated for a long time. The team continued to improve the product's features and architecture, so the internal version of WhereHows for LinkedIn became more and more advanced than the open source version. It even had a new name - DataHub. Based on previous failed attempts, the team decided to develop a scalable long-term solution.

For any new open source project, LinkedIn's open source development team advises and maintains a development model in which the project modules are completely open source. Versioned artifacts are deployed to a public repository and then returned to an internal LinkedIn artifact using an external library request (ELR). Following this development model is not only good for those using open source, but also leads to a more modular, extensible, and pluggable architecture.

However, it will take a significant amount of time for a mature back-end application like the DataHub to reach this state. It also eliminates the possibility of an open source implementation fully working before all internal dependencies are completely abstracted. Therefore, we have developed tools that help us contribute to open source faster and much less painful. This solution benefits both the metadata team (DataHub developer) and the open source community. The following sections will discuss this new approach.

Open source publishing automation

The metadata team's latest approach to open source DataHub is to develop a tool that automatically synchronizes the internal codebase and the open source repository. High-level features of this toolkit include the following (a rough sketch of the first two appears after the list):

Synchronizing LinkedIn code with / from open source, similar to rsync.

License header generation similar to Apache Rat.

Automatically generate open source commit logs from internal commit logs.

Prevent internal changes breaking open source builds by testing dependencies.
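
A minimal sketch of the first two bullet points - one-way rsync-style syncing plus license-header insertion - might look like the following; this is not LinkedIn's actual tool, and the paths and header text are placeholders:

```python
import filecmp
import shutil
from pathlib import Path

# Placeholder locations and header text; the real tool and its configuration
# are internal to LinkedIn, so treat this purely as an illustration.
INTERNAL_REPO = Path("internal-datahub")
OSS_REPO = Path("open-source-datahub")
LICENSE_HEADER = "// Copyright (c) LinkedIn Corporation. Licensed under the Apache License 2.0.\n"

def sync_tree(src: Path, dst: Path) -> None:
    """Copy new or changed files from src to dst (one-way, rsync-like)."""
    for src_file in src.rglob("*"):
        if not src_file.is_file():
            continue
        dst_file = dst / src_file.relative_to(src)
        if not dst_file.exists() or not filecmp.cmp(src_file, dst_file, shallow=False):
            dst_file.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src_file, dst_file)

def ensure_license_header(repo: Path) -> None:
    """Prepend the license header to any Java source file that lacks it."""
    for source in repo.rglob("*.java"):
        text = source.read_text(encoding="utf-8")
        if not text.startswith(LICENSE_HEADER):
            source.write_text(LICENSE_HEADER + text, encoding="utf-8")

if __name__ == "__main__":
    sync_tree(INTERNAL_REPO, OSS_REPO)
    ensure_license_header(OSS_REPO)
```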


The following subsections discuss in more detail those of the functions above that posed interesting problems.