Adaptive carbohydrate absorption model of the AIAPS artificial pancreas system

An artificial pancreas is a system for automated insulin delivery to a person with insulin-dependent diabetes. It connects a continuous glucose monitor, an insulin pump, and a decision-making component (such as AIAPS).


AIAPS acts as the control unit of the artificial pancreas; its task is to regulate blood glucose and keep it within the target range. To make the solution more complete, blood glucose prediction is built using both linear logic and neural networks.

During application development, the project team performs an independent review of the safety of using the system.

The AIAPS feature this article is devoted to is the adaptive (also called dynamic) carbohydrate absorption model.

Background

Let's start with the fact that about nine months ago we introduced a carbohydrate action model into AIAPS, based on how slowly different types of carbohydrates break down, namely (a small sketch of these profiles follows the list):

Type 1: carbohydrates with a duration of 60 minutes and a peak at 25 minutes.
These include Coca-Cola, candy, granulated sugar, etc.
Type 2: carbohydrates with a duration of 120 minutes and a peak at 40 minutes.
For example, sweet pastries or white bread.
Type 3: carbohydrates with a duration of 180 minutes and a peak at 60 minutes.
A complex meal, say buckwheat with meat.
Type 4: carbohydrates with a duration of 240 minutes and a peak at 70 minutes.
Fatty and protein-rich food in large quantities.
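A minimal illustrative sketch of these four profiles. The exact curve shape is our assumption for illustration, not the actual AIAPS implementation; the durations and peak times are the ones listed above.

```python
# Illustrative only: a triangular absorption profile defined by the peak time and
# total duration of each carbohydrate type listed above.
CARB_TYPES = {
    1: {"duration": 60,  "peak": 25},   # fast carbs: cola, candy, sugar
    2: {"duration": 120, "peak": 40},   # sweet pastry, white bread
    3: {"duration": 180, "peak": 60},   # complex meal, e.g. buckwheat with meat
    4: {"duration": 240, "peak": 70},   # large fatty / protein-rich meal
}

def absorption_rate(carb_type: int, minutes_since_meal: float) -> float:
    """Relative absorption rate (0..1) at a given time after the meal."""
    duration = CARB_TYPES[carb_type]["duration"]
    peak = CARB_TYPES[carb_type]["peak"]
    if minutes_since_meal <= 0 or minutes_since_meal >= duration:
        return 0.0
    if minutes_since_meal <= peak:
        return minutes_since_meal / peak                        # linear rise to the peak
    return (duration - minutes_since_meal) / (duration - peak)  # linear fall back to zero

# Example: type 3 carbohydrates 90 minutes after the meal.
print(absorption_rate(3, 90))
```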

We tested this model for six months, constantly observing the durations, but soon ran into the following difficulties (besides the fact that having to set the duration itself makes carbohydrate tracking harder).

Firstly, if a complex meal with buckwheat and meat contains, say, more than 100 grams of carbohydrates (a hearty plateful), we noticed that it would be wrong to set the carbohydrate duration to 180 minutes: after those 180 minutes expire, the glucose rise still continues, and the carbohydrates are clearly the trigger of that rise.

Secondly, some time after a meal, blood glucose begins a distinct decline (even though according to the model there are still plenty of carbohydrates on board), which indicates that the carbohydrates have already been absorbed by the body.

We also noted that during a meal, carbohydrates are consumed at different points in time, which can lead to a longer absorption period than the model assumes.

Thirdly, we ran into the complexity of carbohydrates themselves: we had assumed that, from the start of carbohydrate action, the peak would come at roughly 25-30 minutes. In practice this does not hold; the complicating factors are described above, and the timing was only our assumption.

In addition, we saw that some foods have not one but several peaks, or, instead of a linear decay, break down abruptly.

All of this led us to want to refine the existing model, which had nonetheless been delivering good results.

The new solution

We decided to dig into the data and look at the real picture of carbohydrate breakdown.
To achieve this, we implemented an adaptive carbohydrate absorption model, or adaptive carbohydrate model (MAU).

Below is an example of the first rendering of the adaptive model's display.


Figure 1: Screenshot of the program demonstrating carbohydrate breakdown

Note the difference between the regular model (purple, sampled once per minute) and the actually absorbed carbohydrates (green, sampled once every 5 minutes). The models differ, but even this is a good illustration of how carbohydrates actually behave.

From this we concluded that carbohydrates act longer than we had assumed.

This will help us build a carbohydrate model that matches reality and calculate the coefficients precisely.

How the calculations are done

The first step is predicting glucose based on current readings, insulin, and carbohydrates on board.
The second step is obtaining the actual glucose values.

Next, we calculate the difference between the predicted and the actual values and obtain the glucose delta.

After obtaining the delta, we attribute it (see the sketch after this list):

to carbohydrates
to their absence
to activity.
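A minimal sketch of this attribution step. The variable names and decision rules here are illustrative assumptions, not the actual AIAPS code; the idea is simply to compare predicted and measured glucose and assign the delta to one of the three causes listed above.

```python
def attribute_delta(predicted_glucose, measured_glucose, carbs_on_board, activity):
    """Attribute the prediction error (delta) to carbs, their absence, or activity."""
    delta = measured_glucose - predicted_glucose
    if delta > 0 and carbs_on_board > 0:
        return "carbohydrates", delta          # glucose rose more than predicted: carbs acting
    if delta < 0 and carbs_on_board > 0:
        return "lack of carbohydrates", delta  # carbs on board but no rise: absorption is over
    if activity:
        return "activity", delta               # drop explained by physical activity
    return "unattributed", delta

print(attribute_delta(predicted_glucose=7.8, measured_glucose=9.1,
                      carbs_on_board=45, activity=False))
```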


Once these values have been obtained and the attribution has been made, we can begin regulating glucose according to the acquired data.

Adjustment based on the acquired data is performed in the following cases (a sketch follows this list):

The carbohydrate action has started: insulin can be delivered to cover the carbohydrates being absorbed.
The carbohydrate action did not start on time: in this case its effect can be stretched beyond 1.5 hours.
The action has started, but not in the expected volume: in this case the remaining hormone is delivered according to the amount of carbohydrates actually acting.
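A minimal sketch of the three adjustment cases above. The function signature, thresholds, and return values are illustrative assumptions rather than the production AIAPS algorithm.

```python
def adjust(action_started, started_on_time, expected_carbs, observed_carbs,
           remaining_duration_min):
    """Decide how to react once the carbohydrate action has (or has not) begun."""
    if action_started and started_on_time and observed_carbs >= expected_carbs:
        # Case 1: action started as planned; dose insulin for the expected carbs.
        return {"deliver_insulin_for": expected_carbs}
    if not started_on_time:
        # Case 2: action is late; stretch the expected action window beyond 1.5 hours.
        return {"extend_duration_min": max(remaining_duration_min, 90)}
    # Case 3: action started but weaker than expected; dose only for the carbs actually acting.
    return {"deliver_insulin_for": observed_carbs}

print(adjust(True, True, expected_carbs=60, observed_carbs=60, remaining_duration_min=120))
print(adjust(True, False, expected_carbs=60, observed_carbs=20, remaining_duration_min=60))
```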


What the development is aimed at

The adaptive carbohydrate model is an attempt to free the user from such tedious tasks as:

precise carbohydrate counting
accounting for proteins
counting snacks


And to make insulin delivery considerably safer.

Technical details

Installation and use of the AIAPS artificial pancreas system is currently available for Android devices (versions above 6.0) and will soon become available for the iOS platform. The carbohydrate breakdown visualization is being tested and implemented as a separate program for Windows. The system has been tested by users on Samsung and Xiaomi devices; however, we see no obstacles to it working on other devices.

Soon we will publish the main research results for the adaptive carbohydrate absorption model and extend AIAPS with new capabilities.

How to spot a charlatan in Data Science

You have probably heard about analysts, machine learning specialists, and artificial intelligence, but have you heard of those who are undeservedly overpaid? Meet the data charlatan! These tricksters, drawn in by lucrative jobs, give real data professionals a bad reputation. In this article we look at how to bring such people out into the open.



Data charlatans are everywhere

Data charlatans are so good at hiding in plain sight that you might be one of them without even realizing it. Your organization has probably harbored these tricksters for years, but there is good news: they are easy to identify if you know what to look for.
The first warning sign is failing to understand that analytics and statistics are very different disciplines. I will explain this below.

Different disciplines

Statisticians are trained to draw conclusions about what lies beyond their data; analysts are trained to explore the contents of a dataset. In other words, analysts make statements about what is in their data, while statisticians make statements about what is not in it. Analysts help you ask good questions (form hypotheses), and statisticians help you draw sound conclusions (test hypotheses).

There are also unusual hybrid roles, where one person tries to sit on both chairs... Why not? A fundamental principle of data science: under uncertainty, you cannot use the same data point both to form a hypothesis and to test it. When data is limited, uncertainty forces you to choose between statistics and analytics. An explanation is here.

Without statistics you are stuck, unable to tell whether a formulated judgment actually holds up; without analytics you move blindly, with little chance of taming the unknown. It is a tough choice.

The charlatan's way out of this bind is to ignore it and then pretend to be surprised by what was "unexpectedly" discovered. The logic of statistical hypothesis testing boils down to one question: do the data surprise us enough to change our minds? How can we be surprised by data we have already seen?

Whenever charlatans spot a pattern, get inspired by it, and then test the very same data for that very same pattern so they can publish a result with a legitimate-looking p-value or two, they are lying to you (and perhaps to themselves too). That p-value is meaningless if you did not fix your hypothesis before you looked at your data. Charlatans imitate the motions of analysts and statisticians without understanding the reasons behind them. As a result, the whole field of data science gets a bad reputation.
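A minimal simulation (numpy and scipy assumed; the data is pure noise, so any "pattern" is an artifact of selection) of why a p-value computed on the same data that suggested the hypothesis is misleading, while the same hypothesis tested on held-out data is not.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, n_features = 200, 50
X = rng.normal(size=(n, n_features))   # pure noise "features"
y = rng.normal(size=n)                 # target unrelated to any of them

explore, test = slice(0, n // 2), slice(n // 2, n)

# Exploration: scan the exploration half and pick the most "striking" feature.
p_explore = []
for j in range(n_features):
    r, p = stats.pearsonr(X[explore, j], y[explore])
    p_explore.append(p)
best = int(np.argmin(p_explore))

# Charlatan move: report the p-value from the rows that inspired the hypothesis.
print("p-value on the data that suggested the pattern:", p_explore[best])

# Honest move: test that one pre-chosen feature on rows never looked at.
r_holdout, p_holdout = stats.pearsonr(X[test, best], y[test])
print("p-value on held-out data:", p_holdout)
```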

Real statisticians double-check their answers

Thanks to statisticians' almost legendary reputation for rigorous reasoning, the amount of fake expertise in Data Science is unprecedentedly high. It is easy not to get caught, especially if the unsuspecting victim believes the whole craft lies in equations and data. A dataset is just a dataset, right? No: what matters is how you use it.

Fortunately, you only need one clue to catch the charlatans: they "discover America" in hindsight, "discovering" phenomena that they already know are in the data.

Unlike charlatans, good analysts are unbiased and understand that an inspiring idea can have many different explanations. Good statisticians, meanwhile, carefully pin down their conclusions before they draw them.

Analysts are off the hook... as long as they stay within the bounds of their data. If they are tempted to make claims about what they have not seen, that is an entirely different job. They should take off the analyst's shoes and step into the statistician's. After all, whatever the official job title, there is no rule saying you cannot master both professions if you want to. Just do not confuse them.

Just because you understand statistics well does not mean you understand analytics well, and vice versa. If someone tries to tell you otherwise, be wary. If that person tells you it is fine to run a statistical test on data you have already explored, that is a reason to be doubly wary.

Fancy explanations

Watching data charlatans in the wild, you will notice they love to come up with mind-blowing stories to "explain" the data they have observed. The more academic-sounding, the better. It does not matter that these stories are fitted after the fact.

When charlatans do this — let me not mince words — they are lying. No amount of equations or pretty terminology makes up for the fact that they have offered no fresh evidence for their versions of events. And do not be impressed by how extraordinary their explanations are.

It is the same as demonstrating your "psychic" abilities by first peeking at the card in your hand and then predicting that you are holding... what you are holding. This is hindsight bias, and the data science profession is stuffed with it up to the throat.



Analysts say: "You just played the Queen of Diamonds." Statisticians say: "I wrote my hypotheses down on this piece of paper before we started. Let's play, look at some data, and see whether I was right." Charlatans say: "I knew you were going to play that Queen of Diamonds, because..."

Data splitting is the quick fix to this problem that everyone needs.


When there is little data, you have to choose between statistics and analytics; but when there is plenty of data, you have a great opportunity to do both analytics and statistics without cheating. You also have the perfect defense against charlatans: data splitting, which in my view is the single most powerful idea in Data Science.

To protect yourself from charlatans, all you need to do is make sure the test data is kept out of reach of their prying eyes, and then treat everything else as analytics. When you stumble on a theory you are at risk of accepting, use it to frame the situation, and then open your secret test data to check that the theory is not nonsense. It's that simple!


Make sure no one is allowed to look at the test data during the exploration phase. Stick to the exploratory data for that. Test data must not be used for analytics.
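A minimal sketch of this "split first, explore later" discipline (pandas and scikit-learn assumed; the dataset here is synthetic and purely illustrative).

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
df = pd.DataFrame({"feature_a": rng.normal(size=500),
                   "feature_b": rng.normal(size=500),
                   "outcome": rng.integers(0, 2, size=500)})

# Split before anyone explores anything; the test set stays out of sight.
explore_df, test_df = train_test_split(df, test_size=0.3, random_state=42)

# Analytics phase: look only at explore_df, form hypotheses freely.
print(explore_df.describe())

# Statistics phase: once a single hypothesis is fixed, check it on test_df,
# which nobody has looked at until now.
```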

This is a big improvement over what people were used to in the days of "small data", when you had to explain how you know what you know in order to eventually convince people of something you do not actually know for certain.

The same rules apply to ML/AI

Some charlatans posing as ML/AI experts are also easy to spot. You catch them the same way you would catch any other bad engineer: the "solutions" they try to build keep failing. An early warning sign is a lack of experience with standard industry languages and programming libraries.

But what about people building systems that seem to work? How do you know whether something suspicious is going on? The same rule applies! The charlatan is the shady character who shows you how well the model performed... on the very data that was used to build it.

If you have built an insanely complex machine learning system, how do you know how good it is? You don't, until you show that it works on new data it has never seen before.

When you have seen the data before making the prediction, it is hardly a prediction.


When you have enough data to split, you do not need to lean on the beauty of your formulas to justify your conclusions (an old, popular habit I see everywhere, not only in science). You can simply say: "I know this works, because I can take a dataset I have never seen before and accurately predict what will happen in it... and I will be right. Again and again."

Testing your model or theory on new data is the best basis for trust.
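A minimal sketch (scikit-learn assumed, with a synthetic dataset) contrasting the charlatan's score on the data used to build the model with an honest score on data the model has never seen.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# The first number will look flattering; only the second one deserves trust.
print("score on the data used to build the model:", model.score(X_train, y_train))
print("score on data never seen before:          ", model.score(X_test, y_test))
```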


I cannot stand data charlatans. I do not care how clever your reasoning sounds, and I am not impressed by elegant explanations. Show that your theory or model works (and keeps working) on new data you have never seen before. That is the real test of whether your opinion holds up.

A message to Data Science professionals

If you want to be taken seriously by everyone who gets this joke, stop hiding behind fancy equations to prop up your own biases. Show what you have got. If you want those "in the know" to treat your theory or model as something more than inspiring poetry, have the courage to stage a grand demonstration of how well it performs on a completely new dataset... in front of witnesses!

A message to managers

Refuse to take any "insight" from data seriously until it has been validated on new data. Not willing to put in the effort? Stick with analytics, but do not rely on those ideas: they are shaky and have not been tested for reliability. Moreover, when a company has data in abundance, there is nothing stopping you from making data splitting the foundation of your data science work and enforcing it at the infrastructure level, controlling access to the test data reserved for statistics. It is a great way to shut down attempts to fool you!

If you want to see more charlatans up to no good, here is an excellent Twitter thread.



When there is not enough data for splitting, only a charlatan tries to go beyond inspiration: they discover America in hindsight, "rediscovering" phenomena they already know are in the data and calling the surprise statistically significant. That is what distinguishes them from the open-minded analyst, who deals in inspiration, and the meticulous statistician, who offers evidence for predictions.

When there is plenty of data, get into the habit of splitting it: that way you can have the best of both worlds! Just be sure to do your analytics and your statistics on separate subsets of the original pile of data.

Analysts offer you inspiration and an open mind.
Statisticians offer you rigorous testing.
Charlatans offer you warped hindsight and pass it off as analytics plus statistics.



Perhaps after reading this article you will catch yourself wondering, "am I a charlatan?" That is normal. There are two ways to drive that thought out: first, look back at what you have done and whether your work with data has brought practical benefit. Second, you can keep working on your skills (which is never wasted effort), especially since we give our students the practical skills and knowledge that allow them to become full-fledged data scientists.

Put the programmer into the flow. And don't interfere.

We need a medical certificate for every child. And also a consent form for processing personal data. From each parent. Have everyone fill out the questionnaire. A report on how many boys and girls. Broken down by age, too. And by district of registration. And by school. Please split those into regular schools, lyceums, and gymnasiums. No, that can't be issued. It's only 4 hours. Once a week. Yes, all the teachers must attend. Of course, you also need to perform in the kindergartens. Any of you. Three times a week. And we don't like your costumes; less paint, please - what if they look like parrots?


So, why are there no new productions? Where are the competition wins? What do you mean, you spent two months running around collecting paperwork? What creativity? Why don't you have time for it? What do you mean, hire you a secretary? What do you mean, "I'm leaving"? Do you seriously think you can manage without us? Well, good luck.

This is roughly how the very kind director of a very good dance group described working "under the wing" of a government institution, when explaining why he left that wing.

The story stuck with me, because at the time I was (once again) running an experiment on freeing other creative people - programmers - from the non-core but "so important, necessary, and obligatory" work of meeting deadlines.

What happens if?

I have run this experiment more than once, in different settings: on projects, on product development, with in-house factory programmers, and on customization services. Believe it or not, the result is always the same.

If programmers stop worrying about deadlines and simply receive tasks one after another, without being distracted by anything else, productivity doubles. Conversely, if you turn the "meet the deadline" regime back on, the coefficient is exactly the same - two - only this time productivity is divided by it.

And most importantly: the programmer still misses the deadline no matter what. And if he does hit it, it is only occasionally, by accident. Or at the cost of reduced productivity.

It is all very simple. The truth that a programmer simply does not know how long solving a task will take needs no proof: there are plenty of articles and books on the subject. If you have worked as a programmer, no proof is needed at all. There are, of course, exceptions - same-type, repetitive tasks - but those really are exceptions.

For the most part, our work consists of constantly changing unknowns, long returns to old tasks, surprises from subcontractors and dependency updates, design mistakes, and so on.

How do you plan this kind of work? As far as I know, there are four approaches: fantasy, buffer, volume, and flow.

Planning techniques

Fantasy is applying mass-production planning techniques to programmers' work, for example Lean or MRP. This approach is typically used by "classical managers", especially that separate caste of theirs, the "project managers". You simply extract a planned effort estimate from the programmer, ignoring all his cries of "damn it, I don't even know what I'll run into there", and draw a handsome sausage on a Gantt chart. And redraw it every day.

Buffer covers approaches like the theory of constraints, where a hefty share is simply added to the planned effort, just in case. The result is again drawn as something like a sausage on a Gantt chart. It gets redrawn less often, but still almost always.

Volume is when you plan not the completion date of tasks but throughput. This approach is used in Scrum, for example: knowing the team's approximate velocity (in story points), you can plan the volume of work for a sprint (in those same SPs). Accordingly, all tasks in the sprint share one and the same deadline.

Flow is when there is only speed. Tasks are lined up in a queue; the programmer sits down and solves them one after another. Deadlines are not known, but they can be calculated, given the throughput and the task's position in the queue. The key is not to burden the programmer himself with calculating the deadline.
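A minimal sketch of that calculation (the task names and throughput numbers are made up): the manager, not the programmer, divides the queue position by the observed throughput to get a rough ETA.

```python
from datetime import date, timedelta

completed_per_week = [3, 5, 4, 4]                  # tasks finished per week, observed history
throughput = sum(completed_per_week) / len(completed_per_week)

queue = ["fix import bug", "report export", "new auth flow", "refactor billing"]

for position, task in enumerate(queue, start=1):
    weeks_until_done = position / throughput       # computed outside the programmer's head
    eta = date.today() + timedelta(weeks=weeks_until_done)
    print(f"{task!r}: roughly {eta}")
```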

Pros and cons

There is no point discussing the fantasy approach: it does not work. On top of that, it creates constant, savage stress and idiotic rescheduling work. It is bearable if someone other than the programmer handles the rescheduling, but that rarely happens. Usually the programmer is simply pestered every day with questions like "name the deadline", "when will you finish this task?", "the deadline has already passed, are you going to get any work done or not?" In a natural, organic way the programmer arrives at time buffers on his own, without even really knowing about any of the well-known methods.

Time buffers save you from the hassle, but they reduce productivity because of Parkinson's law: work expands to fill all the time allotted to it. In some settings this approach suits everyone, for example in-house factory programmers. At least until the programmer decides to quit; then, in most cases, he discovers that his working pace is far outside market expectations.

Deadlines do get met this way, since the time buffers can amount to thousands of percent of the real effort. If the business or process is built such that the key factor really is hitting the deadline, then the time-buffer method is perfectly fine.

Volume-based methods such as Scrum do beautifully double productivity by reducing the impact of Parkinson's law and focusing on more or less realistic throughput instead of fantasies and time buffers. However, a sprint is still a deadline, so Parkinson's law keeps operating, and with it come time buffers and attempts to game the estimates (story points). People remain people - programmers and managers alike. Programmers want to be good employees. And managers are so used to counting as good employees only those who "deliver on time" that nothing will change their minds. It is trivial to just rename the whole thing - something like "all backlog tasks must be completed within the sprint, and there is nothing to discuss here." They will invent some KPI for it too, imagination being in short supply.

In flow these problems do not arise, because their root cause is gone: estimating the programmer's work and trying, one way or another, to assign deadlines to it. Flow preserves the essence of a programmer's work - creativity. I would love to say that flow is pure creativity, but that never quite happens. Still, it comes noticeably closer. And productivity doubles again compared to Scrum.

What is interesting: protection of the programmer, or of any performer of the work, is built into every one of the methods mentioned. But when it comes to programmers, that protection is constantly forgotten.

What underlies every approach

Take Lean: oddly enough, it is also based on the idea of flow, only it was invented for the assembly line. The idea is to arrange the work as evenly and harmoniously as possible, so that every performer in the chain, on the one hand, always has something to do, and on the other hand, has no queue piling up in front of him. Only the minimum necessary supply of pending work. For a programmer, that is one task. Try conveying this idea to a Lean-loving manager: he will not even understand what you are talking about, because he skipped the section on protecting the performers when he read the Wikipedia article on lean manufacturing.

In the theory of constraints, which is where the buffers come from, protecting the executing link is the key postulate. Wherever programmers sit, they are almost always the bottleneck. And what does TOC say about the bottleneck? Exactly: it must be protected. Remove all non-core load (including estimating their own work), prevent idle time, do not stuff their heads with idiotic questions and meetings. Organize the flow of work at the speed at which the bottleneck operates. Well, TOC-expert managers, admit it: when did you last think about how to protect programmers from all that nonsense?

And Scrum is all about flow too. There the principle of "do not interfere with people's work" is taken to an absolute and formulated as the requirement of full team autonomy during the sprint. Afterwards, by all means: come, see what came out, pick the tasks for the next run, poke around in people's souls. During the sprint, do not even breathe nearby. Those of you who work in Scrum, what do you say? Nobody bothers you during the sprint, right?



Wherever you look, flow is needed everywhere. So that the programmer simply sits down and programs. Does not calculate deadlines, does not fantasize about effort estimates, does not reshuffle priorities every other minute, does not go to meetings, does not take part in delirious email threads and chats.

And yet, wherever you look, flow is nowhere to be found. Whatever approach is used, a manager, or a client, or some other fool will find a reason to yank the programmer out of his harmonious creative flow over some supremely magnificent piece of nonsense.

Into the flow, for good.

Linux kernel 5.9 released

It has been only two months since Linux kernel 5.8 was called the "biggest" kernel release, and Torvalds has already published a new one, this time version 5.9.


According to journalist Michael Larabel, the kernel contains 20.49 million lines of code, 3.58 million lines of comments, and 3.72 million blank lines. The number of source files has reached 59 thousand. But all right, those are just numbers. What is actually new in the kernel? Let's break it down.

Hardware

For the RISC-V architecture, the developers added support for kcov (a debugfs interface for analyzing kernel code coverage), the kmemleak memory-leak detector, stack protection, jump labels, and tickless operation.
For ARM and ARM64, the schedutil CPU frequency scaling governor is now enabled by default. It uses data from the task scheduler to change the CPU frequency, with schedutil talking directly to the cpufreq drivers; overall, frequencies are adjusted to the current load. (A small sketch for checking the active governor follows this list.)
For Intel graphics, support was added for chips based on the Rocket Lake microarchitecture, along with initial support for Intel Xe DG1 discrete graphics cards.
The amdgpu driver added initial support for the AMD Navi 21 (Sienna Cichlid) and Navi 22 (Navy Flounder) GPUs. In addition, the UVD/VCE hardware video encoding and decoding engines are now supported on Southern Islands GPUs (Radeon HD 7000). At this point the AMD GPU driver is the largest driver in the kernel, at 2.71 million lines of code.
The Nouveau driver gained support for frame-by-frame integrity checking of output using CRCs.
Support was added for a large number of boards, devices, and platforms, including the Pine64 PinePhone v1.2, Lenovo IdeaPad Duet 10.1, ASUS Google Nexus 7, Acer Iconia Tab A500, Qualcomm Snapdragon SDM630 (used in the Sony Xperia 10, 10 Plus, XA2, XA2 Plus, and XA2 Ultra), Jetson Xavier NX, Amlogic WeTek Core2, Aspeed EthanolX, five new boards based on NXP i.MX6, MikroTik RouterBoard 3011, Xiaomi Libra, Microsoft Lumia 950, Sony Xperia Z5, MStar, Microchip Sparx5, Intel Keem Bay, Amazon Alpine v3, and Renesas RZ/G2H.
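A minimal sketch for checking whether the schedutil governor mentioned above is active. It only reads standard Linux cpufreq sysfs files and must be run on the target machine; the exact policy directories present depend on the hardware.

```python
from pathlib import Path

# Each policy* directory describes one group of CPUs managed by cpufreq.
for policy in sorted(Path("/sys/devices/system/cpu/cpufreq").glob("policy*")):
    governor = (policy / "scaling_governor").read_text().strip()
    available = (policy / "scaling_available_governors").read_text().split()
    print(f"{policy.name}: active={governor}, available={available}")
```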


Virtualization and security

For the xtensa and csky architectures, support was added for restricting system calls via the seccomp subsystem.
When building the kernel with Clang, it is now possible to enable the CONFIG_INIT_STACK_ALL_ZERO option, which automatically zero-initializes all variables allocated on the stack (by passing -ftrivial-auto-var-init=zero at build time).
A new capability flag, CAP_CHECKPOINT_RESTORE, appeared; it grants access to process checkpoint and restore functionality without handing out additional privileges.
GCC 11 now has all the capabilities required by the Kernel Concurrency Sanitizer (KCSAN), a tool for dynamically detecting race conditions inside the kernel. This toolkit can now be used with kernels built with GCC.
The code supporting 32-bit guest systems running in paravirtualization mode under the Xen hypervisor has been removed. Users should move to 64-bit kernels.

Memory and system services

The developers have tightened protection against using GPL shim layers to link proprietary drivers with kernel components exported only to GPL-licensed modules. We wrote about this in more detail here.
Support was added for the kcompactd mechanism for proactive compaction of memory pages in a low-priority background mode, which increases the number of huge pages available to the kernel. Thanks to this, latencies when allocating huge pages were reduced by a factor of roughly 70-80 compared with the compaction mechanism used previously.
Support was added for compressing kernel images with the Zstandard (zstd) method.
For x86 systems, support was added for the FSGSBASE processor instructions, which make it possible to read and change the contents of the FS/GS registers from user space.
A new allow_writes parameter appeared; it makes it possible to forbid modification of the processor's MSR registers from user space and to limit access to these registers to read operations. For now writes are not forbidden by default, but in the near future the developers will switch access to read-only mode by default.
The io_uring asynchronous I/O interface gained full support for asynchronous buffered read operations that do not require kernel threads.
The kernel's energy model now covers peripheral devices as well, not just the CPU.
The algorithm for prioritizing threads inside the kernel has been updated. The reworked variants provide consistent behavior across virtually all kernel subsystems when prioritizing real-time tasks.
The sched_uclamp_util_min_rt_default sysctl was added to control CPU frequency boosting for real-time tasks.
New FAN_REPORT_NAME and FAN_REPORT_DIR_FID flags were added to the fanotify mechanism. They make it possible to include the parent directory name and its FID in notifications about creation, deletion, or move events for directory entries that are not themselves directories.
Another significant update is the implementation of a new slab memory controller. As a result, the memory used by slab was reduced by 30-45%, overall kernel memory use was optimized, and memory fragmentation decreased.


Working with files and file systems

A rescue mount option appeared for the Btrfs file system. It unifies access to all the other recovery options. Performance optimizations were carried out, and the ability to use alternative checksum algorithms beyond CRC32c was added.
Inline encryption support was added to the ext4 and F2FS file systems. This feature makes use of the encryption mechanisms built into the drive controller.
In XFS, inode flushing is now fully asynchronous, so threads performing memory reclaim are no longer blocked. A quota issue was also finally resolved that prevented warnings about exceeding the soft limit and the inode count limit from being issued correctly.
Ext4 now prefetches block allocation bitmaps. Together with a limit on scanning, this optimization shortens the mount time of very large partitions.
The SCSI subsystem also gained inline encryption based on hardware encryption engines built into the drive.
For md/raid5, the /sys/block/md1/md/stripe_size parameter was added to configure the stripe block size.

Networking

Netfilter introduces pre-routing rejection of packets.
Added the ability to audit configuration change events in nftables.
For nftables, the netlink API also adds support for anonymous chains, which are named dynamically by the kernel.
BPF now supports iterators for traversing, filtering, and modifying the elements of associative arrays (maps) without copying data into user space.
The new type of BPF programs BPF_PROG_TYPE_SK_LOOKUP is launched when the kernel searches for a socket for an incoming connection.
Added support for PRP (Parallel Redundancy Protocol). It allows Ethernet-based failover to be implemented transparently to applications in the event of a failure of any network component.
There are new possibilities for MPTCP (MultiPath TCP). First of all, this is an extension of the TCP protocol for organizing the operation of a TCP connection with the delivery of packets in parallel along several routes through different network interfaces that are tied to different IP addresses.


According to statistics, the new version contains 16074 fixes from 2011 developers. The total size of the patch is 62 MB. Changed 14,548 files, added 782,155 lines of code, removed 314,792 lines. Approximately 45% of the changes are related to drivers, 15% to code updates for hardware architectures, 13% to networking, 3% to file systems, and another 3% to internal kernel subsystems.

What is good and what is bad: will artificial intelligence have a conscience?

Is artificial intelligence capable of learning the moral values of human society? Can it make decisions in situations where it must weigh the pros and cons? Can it develop a sense of right and wrong? In short, will it have a conscience?


These questions may seem premature given how narrow the tasks of today's AI systems are. But as the science develops, AI's abilities keep expanding. We already see AI methods being applied in areas where the boundary between "good" and "bad" decisions is hard to define, for example in criminal justice or résumé screening.

In the future we expect AI to care for the elderly, teach our children, and perform many other tasks that require human empathy and moral understanding. That is why the question of AI's conscience and conscientiousness is becoming ever more acute.

With this question in mind, I went looking for a book (or books) that would explain how humans developed a conscience and hint at what knowledge about the human brain could help in creating a conscientious artificial intelligence.

A friend recommended I read the book Conscience: The Origins of Moral Intuition by Patricia Churchland, a neurophilosopher and professor emerita at the University of California, San Diego. Dr. Churchland's book, and my own conversation with her, gave me a good sense of the scope and limitations of brain science. Conscience shows how far we have come in understanding the relationship between the brain's physiology and workings and people's moral qualities. But the book also sheds light on how far we still have to go before we truly understand how people make moral decisions.

The book is written in accessible language and will appeal to anyone interested in exploring the biological basis of conscience and reflecting on the "humanity" of artificial intelligence.

What follows is a brief summary of what Conscience says about how moral choice developed in the human brain. Since AI models are effectively modeled on the brain, a better understanding of conscience can help us see what an artificial intelligence would need in order to learn moral norms.


A system built to learn
"Conscience is an individual's judgment about what is right or wrong. Usually, though, this judgment reflects the views of the group to which the person feels they belong," Dr. Churchland writes in the book.


But how did people develop the capacity to understand what is right and what is wrong? To answer this question, Dr. Churchland takes us back in time, to when our first warm-blooded ancestors appeared.

Birds and mammals are endotherms: their bodies can retain heat. Reptiles, fish, and insects, by contrast, are cold-blooded organisms whose bodies adapt to the temperature of their surroundings.

The huge advantage of endothermic animals is the ability to forage at night and survive in colder climates. But such creatures need far more food to survive. This led to a series of evolutionary steps in the brains of warm-blooded animals that made them smarter. The most notable change is the formation of the cerebral cortex.

The cerebral cortex can integrate sensory signals and form abstract representations of events and things relevant to survival and reproduction. It learns, generalizes new knowledge, remembers, and keeps on learning.

The cortex allows mammals to be more resilient and less sensitive to changes in weather and landscape, unlike insects and fish, which depend heavily on stable environmental conditions.

On the other hand, the ability to learn comes at a cost: mammals are born helpless and vulnerable. Unlike snakes, turtles, and insects, which are ready to move around and function fully as soon as they hatch, mammals need a certain period of time to develop their survival skills.

Moreover, they depend on one another for their survival.


Development of social behavior 

The brains of all living creatures contain a system of reward and punishment that ensures they do whatever it takes to survive and pass on their genes. The mammalian brain adapted this function somewhat to fit life in a group.

"In the course of evolution, the sensations of pleasure and pain that helped with survival were repurposed to promote affiliative behavior," Churchland writes. "Value spread to a related but new sphere: attachment to others."

The fundamental cause of this change is the helplessness of the offspring. Evolution required changes in the mammalian brain to put care for the young first. Mothers, and in some species both parents, will go to any lengths to protect and feed their offspring, even when there is no benefit in it for themselves.


In the book "Conscience" the compiler outlines experiments to determine the biochemical reactions of the brain of various mammals, which reward social behavior, including custody of offspring.

"The social life of mammals is strikingly different from that of other social animals that lack a cerebral cortex, such as bees, termites, and fish," Churchland writes. "The mammalian brain is more flexible, less bound to reflexes, more responsive to changes in the environment. It is capable of both long-term and short-term judgments. The social brain of mammals allows them to navigate their social world and to understand what the rest of the group expects of them."


Human social behavior 

The human brain has the largest and most complex cortex of all mammals. The brain of Homo sapiens is three times the size of the chimpanzee brain, with whom we shared a common ancestor 5-8 million years ago.

A big brain naturally makes us smarter, but it also demands considerably more energy. So how were we able to cover this high-calorie bill?

"The ability to cook food over fire was most likely the decisive behavioral change that allowed the hominin brain to grow beyond the chimpanzee brain and to keep evolving rather quickly," Churchland writes.


Having learned to meet the body's energy needs, hominins became capable of more complex tasks as a result, for example refining social behavior and building group hierarchies.

It turns out that our behavior, including adherence to moral norms and rules, is the result of the struggle for survival and the need to obtain the necessary number of calories.

The plain need for energy "does not sound terribly profound, of course, but it is nevertheless a very real reason," Churchland writes in Conscience.

Our genetic evolution favored the development of social behavior. Moral norms emerged as solutions to our needs. And we humans, like all other living creatures, obey the laws of evolution, which Churchland describes as "a blind process which, having no goal, works with already existing structure." The architecture of our brain is the result of countless experiments and adjustments.

"Between the part of the brain responsible for self-care and the part that internalizes social norms lies precisely what we call conscience," Churchland writes. "In this sense, our conscience is a 'construct' of the brain through which the instincts of caring for ourselves and for others are turned into concrete actions via development, imitation, and learning."

This is a rich and complex topic, and for all the achievements of brain science, many mysteries of human reasoning and behavior remain unsolved.

"The fact that the simple need for food played a huge role in the origin of human morality does not mean that decency and conscientiousness should be devalued. These virtues remain worthy of admiration and are infinitely important to us, regardless of their humble origins. They are what make us human," Churchland writes.


Artificial intelligence and conscience 

In the book "Conscience" Churchland discusses other topics, including the importance of teaching with reinforcement in the development of social action and the ability of the peel of the uncle's brain to engage in his own experience, philosophize over counterfactual judgments, exercise world modifications, make analogies, and more.

In essence, we use the very same reward machinery that allowed our ancestors to survive, and rely on the distinctive features of our layered cortex, to make complex moral decisions.

"Moral norms appeared in the context of social tension and took root on biological soil. The learning of social practices relies on the brain's system of positive and negative reinforcement, as well as on its problem-solving ability," Churchland writes.

After reading "Conscience", I had the seemingly invisible questions about the role of this moral compass in AI. Will the responsibility of the partially unnatural intellect become irreparable? If physiological constraints have pushed us to develop social norms and moral action, do we need similar conditions for AI? Do physiological experiment and sensory perception play a decisive role in the development of intelligence?

Fortunately, after reading Conscience, I had the chance to discuss these issues with Dr. Churchland in person.


Is physical experience required to develop conscience in artificial intelligence? 

As is clear from Dr. Churchland's book (and other studies of biological neural networks), physical experience and constraints play a major role in the development of intelligence, and hence of conscience, in humans and animals.

But today, when we talk about artificial intelligence, we mean software architectures, including artificial neural networks. AI in its current form is, as a rule, just lines of code running on computers and servers and processing data. Will physical experience and constraints be enough to develop a genuinely human-like AI, one capable of appreciating and respecting the moral norms of society?

"It is hard to predict how flexible AI can become, given that the anatomy of the machine is so utterly different from the anatomy of the brain," Dr. Churchland said in our conversation. "In biological systems, the decisive feature is the reward system, the system of reinforcement learning. Feelings of positive and negative reinforcement are needed to make sense of the environment. That may not be the case for artificial neural networks. We simply do not know."

Dr. Churchland noted that we still do not know how the brain thinks.

"With that knowledge, we might not need to reproduce every detail of the biological brain in artificial intelligence in order to obtain the same behavior," she added.

Churchland recalled that the AI community initially rejected neural networks, but ultimately acknowledged their effectiveness once neural networks proved themselves on computational tasks. Even though today's neural networks fall fundamentally short of the human brain's capabilities, surprises may yet await us.

"We know that mammals, with a developed cortex, a reinforcement system, and biological neural networks, can learn and structure information without a huge amount of data," she said. "For now, an artificial neural network can be as good at recognizing faces as it is helpless at classifying mammals. The difference may simply be a matter of numbers."

Do we need to replicate the subtle physical differences of the brain in AI?

One conclusion I drew after reading Conscience is that people generally accept society's social norms but sometimes challenge them. The unique architecture of the human brain, the genes inherited from our parents, and the experience we accumulate all let us fine-tune our moral guidelines. People can revise previously established moral norms and laws, and even invent new ones.

One of the most discussed peculiarities of artificial intelligence is its reproducibility. Once you build an AI algorithm, you can immediately make countless copies of it and deploy them on any number of devices. All of them will behave identically given the same final weights of their neural networks. The question then arises: if all the AIs are identical, will they be static in their social behavior, and thus lack the flexibility that defines the dynamics of society's social progress?

"Until we have a complete picture of how the brain works, this question will be hard to answer," Churchland said. "We know that to get complex behavior out of a neural network, it does not have to contain biological elements such as mitochondria, ribosomes, proteins, and membranes. What else can be left out? We do not know... Without data, I am just one more person voicing an opinion. And I have no data that says: imitate such-and-such circuits in the reinforcement learning machinery, and you will get a more human-like neural network."

We still have much to learn about the human conscience, and even more about whether and exactly how it applies to artificial intelligence technologies.

"We do not know exactly what the brain is doing when we learn to keep our balance in a headstand," Churchland writes in Conscience. "We know even less about what the brain is doing when we learn to find balance in a socially complex world."

YouTube will let products from videos be sold directly on the platform

YouTube has launched a test of a feature that lets users buy products directly from videos on the platform. Creators of such videos can now use YouTube's software to tag and track their products.


The software is expected to work in conjunction with Google's own tools.

According to sources, the site will become a "catalog of goods" that viewers can browse while watching videos and buy from immediately.

The platform itself confirmed that the experiment is indeed underway, but only with a small number of videos.

Down the line, if the experiment succeeds, YouTube could compete with Amazon and Alibaba. However, the mechanism by which the platform will earn revenue from merchants has not yet been disclosed.

Among other things, YouTube is testing an integration with Shopify, the Canadian developer of online and retail commerce software. That test launched at the end of 2019: video creators were offered the ability to display up to 12 products for sale beneath their videos.

In the spring, Facebook launched its Shops service. Merchants can apply their own branding to products, publish online catalogs, and customize the look of their virtual storefronts. The company stated that the options would be free and available to any organization. The launch was explained by the need to support small businesses during the pandemic. Customer support is handled via Messenger, WhatsApp, or Instagram direct messages. Later, Facebook announced the launch of an electronic payment service in WhatsApp, so far in test mode. Sending money or making payments will be free for individual users.

Hynix unveils world's first DDR5 DRAM

The Korean company SK hynix has unveiled the world's first DDR5 DRAM, according to the company's official blog.



According to SK hynix, the new memory provides data transfer rates of 4.8-5.6 Gbps per pin. That is up to 1.8 times the rate of previous-generation DDR4 memory. The manufacturer also says the module voltage has been reduced from 1.2 to 1.1 V, which in turn improves the energy efficiency of DDR5 modules. ECC (Error Correcting Code) support has also been implemented; according to the company, this feature will improve application reliability by a factor of 20 compared with the previous generation. The minimum module capacity is stated as 16 GB, the maximum as 256 GB.

The new memory was developed according to the JEDEC Solid State Technology Association specification published on July 14, 2020. According to the JEDEC announcement, the DDR5 standard supports twice the bandwidth of DDR4: up to 6.4 Gbps per pin versus the current 3.2 Gbps for DDR4. The rollout of the standard will be "gradual", however: the first modules, as planned by the organization and as SK hynix has shown, are only about 50% faster than DDR4, since they run at 4.8 Gbps.
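A small back-of-the-envelope check of the rates quoted above (illustrative only): the per-pin speed-up over DDR4-3200 and the resulting bandwidth of a single 64-bit channel.

```python
ddr4_gbps_per_pin = 3.2
for ddr5_gbps_per_pin in (4.8, 5.6, 6.4):
    speedup = ddr5_gbps_per_pin / ddr4_gbps_per_pin
    channel_gb_per_s = ddr5_gbps_per_pin * 64 / 8   # 64 data lines, 8 bits per byte
    print(f"DDR5 at {ddr5_gbps_per_pin} Gbps/pin: "
          f"{speedup:.2f}x DDR4-3200, ~{channel_gb_per_s:.1f} GB/s per 64-bit channel")
```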

According to the announcement, the company is ready to start mass production of memory modules of the new standard. All preparatory stages and tests, including testing by third-party manufacturers of major processors, have been completed, and the company will ramp up production and sales of the new memory type as soon as the surrounding hardware meets the specifications. Intel was actively involved in the development of the new memory.
Intel's involvement is no coincidence. SK hynix says that, for now, the main consumers of the new generation of memory will in its view be data centers and the server segment as a whole. Intel still dominates that market, and in 2018, when the active phase of collaboration and testing of the new memory began, it was the undisputed leader in the processor segment.

Jonghoon Oh, executive vice president and chief marketing officer at SK hynix, stated:

SK hynix will focus on the fast growing premium server market, strengthening its position as the leading server DRAM company.


The main phase of the new memory's market entry is planned for 2021, when demand for DDR5 is expected to start growing and hardware capable of working with the new memory goes on sale. Synopsys, Renesas, Montage Technology, and Rambus are currently working with SK hynix to build the DDR5 ecosystem.

By 2022, SK hynix predicts DDR5 will hold a 10% share of the memory market, rising to 43% by 2024. However, it is not specified whether this refers to server memory or the entire market, including desktops, laptops, and other devices.

The company is confident that its development, and the DDR5 standard in general, will be extremely popular among specialists working with big data and machine learning, among high-speed cloud services and other consumers for whom the speed of data transfer inside the server itself is important.