Unrecognized or legacy configuration settings found
If you see this error while running Yarn 2 or Yarn 3, it's possibly because you have a rogue YARN_XXXX environment variable in your environment.
In my case, it was a YARN_WORKSPACES environment variable I accidentally created while using the Netlify BaaS; I was supposed to name it NETLIFY_YARN_WORKSPACES.
Delete the rogue environment variable (in this case from Netlify's Build GUI, but in other cases it could be from your Dockerfile or CI/CD pipeline) and your problem will disappear.
Basically, Yarn is a busybody: if it sees YARN_XXXXXX in your environment and doesn't recognise the setting, it will complain and fail your build.
I discovered this by modifying the build command to run yarn config -v and see the list of errors; the error code itself was undocumented.
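If you can get a shell on the build image, a quick way to hunt down the culprit is something like this (a sketch; YARN_WORKSPACES is just my particular offender):

# list any YARN_* variables that Yarn 2/3 will try to interpret as settings
env | grep '^YARN_'

# remove the offender for the current shell (or delete it in your CI / Netlify UI)
unset YARN_WORKSPACES

# re-run the config check to confirm the error is gone
yarn config -v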
See https://yarnpkg.com/advanced/error-codes#yn0050---deprecated_cli_settings for another Netlify configuration error
In my case, I was passing parameters to a console program that contacts a service, both in the same solution... Visual Studio's "multiple startup projects" feature doesn't give me enough control.
* Publish in debug mode to a folder
* Start up the service after setting the ASPNETCORE_ENVIRONMENT variable
(Do PowerShell for powers; see the sketch after this list)
* Now start up your web project / console program / whatever is going to contact the service in Visual Studio
* Now back to OLD SCHOOL and ATTACH TO PROCESS
(I know, greatest window in the world)
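A minimal PowerShell sketch of those middle steps (the publish path and DLL name here are hypothetical):

# set the environment variable for this console session only
$env:ASPNETCORE_ENVIRONMENT = "Development"

# run the published service from the publish folder; Kestrel hosts it, no IIS involved
dotnet .\publish\MyService.dll

Then, in Visual Studio, use Debug > Attach to Process and attach to the dotnet process running your DLL.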
So it's a little pain in the ass to get it to work, especially if you debug a lot: attach to process every single time. But the key is that dotnet core comes with its own webserver (Kestrel). You don't need IIS anymore, and you don't need IIS Express anymore.
I am sure there's some crazy way to get it working with IIS / Visual Studio integration, remote debugging, etc., etc., but this way works and doesn't involve downloading a half dozen things and configuring IIS (which is half the point of dotnet core, lol). It also gets you ready for the day everything is on command line and you don't need Visual Studio (yeah, right).
Sometimes, developers are given tasks outside of their usual area of responsibility. For example, dealing with gzipping.
Gzip is a compression algorithm that has existed for over 25 years. It's a standard on the Internet, and almost everything is served gzipped if it is served properly. There are various ways to deal with this, for example just letting the webserver gzip on the fly. However, you may run into a situation where that is impossible. For example, you may face an artificial limit where no file can be larger than 1 MB.
(No, code splitting is not always an answer; in particular, if you have an integration between different products, code splitting creates an unstable integration between two products with different release cycles. A little bit of knowledge is a dangerous thing without the details.)
And of course, even if you manage to gzip, if the infrastructure cannot guarantee the Content-Type and Content-Encoding HTTP headers, and further headers like Vary: Accept-Encoding, then the browser may decide to download the gzipped files instead of ungzipping them itself. Or simply crash.
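For reference, a pre-gzipped JavaScript file should go out with headers along these lines (the exact Content-Type can vary by server):

Content-Type: application/javascript
Content-Encoding: gzip
Vary: Accept-Encoding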
It is also a general ask for JavaScript developers, particularly full-stack JavaScript developers (for example with Express as the webserver), to deal with gzipping manually. However, who knows where the code will be served? It could be served on Apache, on IIS, or on a CDN. So chances are, you will be asked during your career to gzip files where
a) you cannot guarantee the HTTP headers
or have other restrictions such as
b) you cannot guarantee the file size (as of May 2018, there are 400 issues open in the webpack issue tracker for splitting by file size). Even if code splitting by file size (actually called chunking) is done, it's experimental and bug-ridden. And besides, splitting into many files is not compression... unless you serve over HTTP/2, serving many files introduces an overhead. Gzipping is a standard, it must be done, and the gains are too big to ignore. We are looking at gains of 5 to 10 times.
(In case you are wondering: no, you cannot access the browser's native ungzip facility with JavaScript. That is only available when the HTTP headers are present, and you never have access to the raw script text anyway due to cross-origin policy, so you will be looking at an AJAX request. If you can't make an AJAX request because of a missing Access-Control-Allow-Origin header or missing whitelisting, tough shit, you've got much bigger problems.)
So what is a developer to do? Wash his hands and blame the ops guys? Who cares about gzip, right? It's not our problem, it's the server's problem. In fact, who cares about user experience at all? It can take ten seconds to load; we will just wash our hands of these stupid server troubles. We are not server guys, we are developers. Who cares about HTTP headers and how it's hosted, right?
Of course not. Let's put the Dev back in DevOps and ungzip on the fly, with or without HTTP headers, on any infrastructure (well, except for the Access-Control-Allow-Origin header that everyone has). Yeah baby! It will be dirty and messy, but it will work.
Build Process
You can gzip in many ways, for example with this plugin if you are using webpack.
You can also just use the Linux gzip utility as part of your build process.
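For example (a sketch; the file name is illustrative, and -k requires gzip 1.6 or newer):

# -9 for maximum compression, -k to keep the original file next to the .gz
gzip -9 -k dist/test.js
# produces dist/test.js.gz alongside dist/test.js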
The Client Side Code (or, the SECRET SAUCE)
We will use the library pako.js to ungzip on the fly, with or without the correct HTTP headers.
In order to make sure the JavaScript files load in the correct order, we will use JavaScript Promises (which require a shim for IE support) and the JavaScript Fetch API (which also requires a shim for IE support). These are the required libraries.
1. Load scripts in the correct order
2. Deal with HTTP errors with a CheckXHR method (write this! a minimal sketch follows the snippet below)
3. Fallback to the JS version, should an error occur
fetch('http://www.example.com/test.js.gz')
    .then(CheckXHR)
    .then(function (response) {
        return response.arrayBuffer(); // important: pass to pako.js as an ArrayBuffer, not as a string
    })
    .then(function (arr) {
        return injectScript(arr);
    })
    // load more scripts here by chaining further .then() calls
    .catch(function (err) { // note: Promises use .catch, not .onError
        // deal with the error... I suggest loading the ungzipped JS files as a fallback here
    });
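For completeness, a minimal CheckXHR might look like this (a sketch; fetch only rejects on network failure, so we reject non-2xx responses ourselves):

function CheckXHR(response) {
    // response.ok is true for HTTP 200-299; anything else becomes a rejected Promise
    if (!response.ok) {
        throw new Error('HTTP error ' + response.status + ' while fetching ' + response.url);
    }
    return response;
}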
We will dynamically inject the script, again returning a JavaScript Promise on completion. Because we are assigning the script text directly rather than setting a src attribute, we don't have to use onload or onreadystatechange (IE).
function injectScript(arr) {
    return new Promise(function (resolve, reject) {
        var script = document.createElement('script');
        // ungzip the ArrayBuffer into a string of JavaScript source
        script.text = pako.ungzip(arr, { to: 'string' });
        // inline script text executes synchronously on append
        document.head.appendChild(script);
        resolve(); // if pako.ungzip throws, the executor rejects the Promise automatically
    });
}
And there we go, complete.
With great power comes great responsibility; make sure you measure the performance in the browser to see the decrease not only in file size but also in how long it takes to actually use the web application.
Hopefully this helps someone
P.S. Message to server guys: we can code on a 386 or a Raspberry Pi or a Commodore 64 or a TRS-80 or string and yarn and foodstuffs, but that doesn't mean it's a good idea or a good use of time, money, or resources. Upgrade your infrastructure to allow gzipping of any arbitrary file size with the correct HTTP headers, and make the infrastructure work with the developers, not against them, because next time the problem might not be so (un)easy to solve.
If you cannot connect your Microsoft Surface or Surface Pro to any WiFi network and have tried everything else, look at the date.
The WiFi will refuse to connect without a correct date... you don't have to be exact to the millisecond, but you do have to be within the minute range, particularly with corporate networks.
Change the date and time manually to the correct values and WiFi may magically work again.
This is after disabling IPv6 and trying the various other suggestions you may find elsewhere -- do some Googling.
Just read an article by Robert Martin, of "Clean Code" fame (if you haven't heard of Clean Code, read it -- it's probably the seminal text for clean object-oriented programming... some of the advice in it, on test-driven development and Java, is dated, but it is worth a skim at least)
"Without software: Phones don't ring. Cars don't start. Planes don't fly. Bombs don't explode. Ships don't sail. Ovens don't bake. Garage doors don't open. Money doesn't change hands. Electricity doesn't get generated. And we can't find our way to the store. Nothing happens without software. And what is software? Software is a set of rules."
Also, this
"If the ranks of programmers has doubled every five years, then it stands to reason that most programmers were hired within the last five years, and so about half the programmers would be under 28. Half of those over 28 would be less than 33. Half of those over 33 would be less than 38, and so on. Less than 0.5% of programmers would be 60 or over. So most of us old programmers are still around writing code. It's just that there never were very many of us.
What does this imply for our industry?
Maybe it's not as bad as Lord of the Flies, but the fact that juniors exponentially outnumber seniors is concerning. As long as that growth curve continues[4] there will not be enough teachers, role models, and leaders. It means that most software teams will remain relatively unguided, unsupervised, and inexperienced. It means that most software organizations will have to endlessly relearn the lessons they learned the five years before. It means that the industry as a whole will remain dominated by novices, and exist in a state of perpetual immaturity." - Robert Martin
Basically the problem is this, more in 2017 than ever: the world has unwittingly ceded control of its financial, healthcare, and private personal information, for better or worse, to computer programmers. We have a duty to create systems which are maintainable, robust, and error-free, even if it costs us in the short term.
What are the problems? The problems are in-your-face, serious, and unfortunately have nothing at all to do with coding or computer programming.
Example #1 (easy): The boss asks for a deadline. Either you take a shortcut, or you take 20% more time... piss them off now and make them happy later, or go for the short-term gain.
Example #2 (hard): You are in a position of great authority to pick a framework or a technology, and you can either choose what is cool and hot, and therefore good for your career (great, one more line for your resume!), or choose proven but less cool and less interesting technologies. Balance this against whether or not the technology is about to leave the market (you can write a website in COBOL, but it is NOT a good idea!)
Example #3 (very hard): You have the ear of business people, and you must convince them of the need to create a process or build a framework or library which has no readily apparent business value and no readily apparent use cases, but will increase developer productivity ten-fold down the line, or allow you to enter emerging markets or attack potential competitors.
Example #4 (extreme): You must either sacrifice personal time, emotion, and energy to create a process / library / framework for your company, or down the road you see the end of your company or business (at least tech-wise)... the tech is so bad nobody will want to work there or stay there; you see the train coming a mile away, but you are superglued to the tracks, at least at work. So you either have to sacrifice in order to move the company in the right direction, and get zero credit for it, or hold your tongue and hope that the world works differently than you think.
What are the answers to these problems? I could give my answers, but they would be my answers.
The point is, there are no right answers... it depends on the situation, the market, and most of all experience. And, if Robert Martin's numbers are to be believed, experience is severely lacking.
I don't pretend to know anything about making money or business. Maybe markets are all about a point in time, and maybe writing spaghetti code and awful code is the way to do it -- forget about "tech" things like build processes, GitHub, Open Source, frameworks, automation, libraries and command line tools. Forget about The Art of Unix Programming and give the business people what they want, all the time, because the market wants now and only now, and later will be too late because the market won't exist anymore.
Or, we could draw a line and say this far and no further... the line must be drawn here. Either take the time to do it right, or suffer the consequences.
How many "senior" developers, and technical leads and architects know this? How many programmers even care about these issues?
In the end we must all do things we can live with. We all have our own moral codes and standards. The choice is easy. Living with what we choose is the hard part.
5. Get the URL from GitHub and Push Code to GitHub
git remote add origin <url>
git push -u origin master (you may be prompted for your GitHub username + password)
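For example, with a hypothetical repository URL:

git remote add origin https://github.com/yourname/yourrepo.git
git push -u origin master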
See the file successfully on GitHub!
Future Steps
1. Learn what a branch is (the whole point of DVCS - distributed version control and Git!)
2. Learn what a fork is (the whole point of GitHub!)
3. Fork an existing open source project with an issue you can solve
* Pick a language, or learn a language! (JavaScript / C# / Java / C++ / C / whatever!)
* If you don't know ANY programming, play around with this then take some "intro to programming" or "learn programming" course (preferably one that's fun)
* Start small, change one line or a handful of lines of code!
4. Commit and push your change to your own fork, then issue a pull request!
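Something like this (branch name, file path, and commit message are all illustrative):

# work on a branch -- the whole point of DVCS!
git checkout -b fix-typo
git add path/to/changed-file
git commit -m "Fix typo in README"
# push the branch to your fork, then open the pull request on GitHub
git push -u origin fix-typo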