Computing Fails
Mistakes made by flawed humans.
Confirmed Fails
- IPv6. Decades after its introduction, still no one wants to use it. At some point, you have to admit that IPv6 was a fail, and engineers should make a replacement.
- Windows after XP. XP was a great operating system. But it was the last great Windows.
- ARM on cellphones. Millions (billions?) of devices that are unusable after 5 years. Landfill.
- CrowdStrike. Boy, this one really "struck the IT crowd / made the IT crowd strike"... get it? [1]
Potential Fails
- systemd. While it works, it is practically no better than the previous solutions, and it is a sprawling mess of code.
- wayland. This may end up making a whole lot of work for people, or it may fix what was wrong with X (whatever that was; I don't know, since I don't work with X much).
Too Much
Programs that are full of junk or are otherwise misleading. With wget, we get too much info. This problem isn't specific to wget; it is a common sight. Less is more.
usable wget
$ wget -h
wget: unrecognized option: h
BusyBox v1.31.1 () multi-call binary.

Usage: wget [-c|--continue] [--spider] [-q|--quiet] [-O|--output-document FILE]
        [-o|--output-file FILE] [--header 'header: value'] [-Y|--proxy on/off]
        [-P DIR] [-S|--server-response] [-U|--user-agent AGENT] [-T SEC] URL...

Retrieve files via HTTP or FTP

        --spider        Only check URL existence: $? is 0 if exists
        -c              Continue retrieval of aborted transfer
        -q              Quiet
        -P DIR          Save to DIR (default .)
        -S              Show server response
        -T SEC          Network read timeout is SEC seconds
        -O FILE         Save to FILE ('-' for stdout)
        -o FILE         Log messages to FILE
        -U STR          Use STR for User-Agent header
        -Y on/off       Use proxy
unusable wget
$ wget -h
GNU Wget 1.20.1, a non-interactive network retriever.
Usage: wget [OPTION]... [URL]...

Mandatory arguments to long options are mandatory for short options too.

Startup:
  -V,  --version                   display the version of Wget and exit
  -h,  --help                      print this help
  -b,  --background                go to background after startup
  -e,  --execute=COMMAND           execute a `.wgetrc'-style command

Logging and input file:
  -o,  --output-file=FILE          log messages to FILE
  -a,  --append-output=FILE        append messages to FILE
  -d,  --debug                     print lots of debugging information
  -q,  --quiet                     quiet (no output)
  -v,  --verbose                   be verbose (this is the default)
  -nv, --no-verbose                turn off verboseness, without being quiet
       --report-speed=TYPE         output bandwidth as TYPE. TYPE can be bits
  -i,  --input-file=FILE           download URLs found in local or external FILE
  -F,  --force-html                treat input file as HTML
  -B,  --base=URL                  resolves HTML input-file links (-i -F) relative to URL
       --config=FILE               specify config file to use
       --no-config                 do not read any config file
       --rejected-log=FILE         log reasons for URL rejection to FILE

Download:
  -t,  --tries=NUMBER              set number of retries to NUMBER (0 unlimits)
       --retry-connrefused         retry even if connection is refused
       --retry-on-http-error=ERRORS  comma-separated list of HTTP errors to retry
  -O,  --output-document=FILE      write documents to FILE
  -nc, --no-clobber                skip downloads that would download to existing
                                     files (overwriting them)
       --no-netrc                  don't try to obtain credentials from .netrc
  -c,  --continue                  resume getting a partially-downloaded file
       --start-pos=OFFSET          start downloading from zero-based position OFFSET
       --progress=TYPE             select progress gauge type
       --show-progress             display the progress bar in any verbosity mode
  -N,  --timestamping              don't re-retrieve files unless newer than local
       --no-if-modified-since      don't use conditional if-modified-since get
                                     requests in timestamping mode
       --no-use-server-timestamps  don't set the local file's timestamp by
                                     the one on the server
  -S,  --server-response           print server response
       --spider                    don't download anything
  -T,  --timeout=SECONDS           set all timeout values to SECONDS
       --dns-timeout=SECS          set the DNS lookup timeout to SECS
       --connect-timeout=SECS      set the connect timeout to SECS
       --read-timeout=SECS         set the read timeout to SECS
  -w,  --wait=SECONDS              wait SECONDS between retrievals
       --waitretry=SECONDS         wait 1..SECONDS between retries of a retrieval
       --random-wait               wait from 0.5*WAIT...1.5*WAIT secs between retrievals
       --no-proxy                  explicitly turn off proxy
  -Q,  --quota=NUMBER              set retrieval quota to NUMBER
       --bind-address=ADDRESS      bind to ADDRESS (hostname or IP) on local host
       --limit-rate=RATE           limit download rate to RATE
       --no-dns-cache              disable caching DNS lookups
       --restrict-file-names=OS    restrict chars in file names to ones OS allows
       --ignore-case               ignore case when matching files/directories
  -4,  --inet4-only                connect only to IPv4 addresses
  -6,  --inet6-only                connect only to IPv6 addresses
       --prefer-family=FAMILY      connect first to addresses of specified family,
                                     one of IPv6, IPv4, or none
       --user=USER                 set both ftp and http user to USER
       --password=PASS             set both ftp and http password to PASS
       --ask-password              prompt for passwords
       --use-askpass=COMMAND       specify credential handler for requesting
                                     username and password. If no COMMAND is
                                     specified the WGET_ASKPASS or the SSH_ASKPASS
                                     environment variable is used.
       --no-iri                    turn off IRI support
       --local-encoding=ENC        use ENC as the local encoding for IRIs
       --remote-encoding=ENC       use ENC as the default remote encoding
       --unlink                    remove file before clobber
       --xattr                     turn on storage of metadata in extended file attributes

Directories:
  -nd, --no-directories            don't create directories
  -x,  --force-directories         force creation of directories
  -nH, --no-host-directories       don't create host directories
       --protocol-directories      use protocol name in directories
  -P,  --directory-prefix=PREFIX   save files to PREFIX/..
       --cut-dirs=NUMBER           ignore NUMBER remote directory components

HTTP options:
       --http-user=USER            set http user to USER
       --http-password=PASS        set http password to PASS
       --no-cache                  disallow server-cached data
       --default-page=NAME         change the default page name (normally
                                     this is 'index.html'.)
  -E,  --adjust-extension          save HTML/CSS documents with proper extensions
       --ignore-length             ignore 'Content-Length' header field
       --header=STRING             insert STRING among the headers
       --compression=TYPE          choose compression, one of auto, gzip and none. (default: none)
       --max-redirect              maximum redirections allowed per page
       --proxy-user=USER           set USER as proxy username
       --proxy-password=PASS       set PASS as proxy password
       --referer=URL               include 'Referer: URL' header in HTTP request
       --save-headers              save the HTTP headers to file
  -U,  --user-agent=AGENT          identify as AGENT instead of Wget/VERSION
       --no-http-keep-alive        disable HTTP keep-alive (persistent connections)
       --no-cookies                don't use cookies
       --load-cookies=FILE         load cookies from FILE before session
       --save-cookies=FILE         save cookies to FILE after session
       --keep-session-cookies      load and save session (non-permanent) cookies
       --post-data=STRING          use the POST method; send STRING as the data
       --post-file=FILE            use the POST method; send contents of FILE
       --method=HTTPMethod         use method "HTTPMethod" in the request
       --body-data=STRING          send STRING as data. --method MUST be set
       --body-file=FILE            send contents of FILE. --method MUST be set
       --content-disposition       honor the Content-Disposition header when
                                     choosing local file names (EXPERIMENTAL)
       --content-on-error          output the received content on server errors
       --auth-no-challenge         send Basic HTTP authentication information
                                     without first waiting for the server's challenge

HTTPS (SSL/TLS) options:
       --secure-protocol=PR        choose secure protocol, one of auto, SSLv2,
                                     SSLv3, TLSv1, TLSv1_1, TLSv1_2 and PFS
       --https-only                only follow secure HTTPS links
       --no-check-certificate      don't validate the server's certificate
       --certificate=FILE          client certificate file
       --certificate-type=TYPE     client certificate type, PEM or DER
       --private-key=FILE          private key file
       --private-key-type=TYPE     private key type, PEM or DER
       --ca-certificate=FILE       file with the bundle of CAs
       --ca-directory=DIR          directory where hash list of CAs is stored
       --crl-file=FILE             file with bundle of CRLs
       --pinnedpubkey=FILE/HASHES  Public key (PEM/DER) file, or any number
                                     of base64 encoded sha256 hashes preceded by
                                     'sha256//' and separated by ';', to verify
                                     peer against
       --ciphers=STR               Set the priority string (GnuTLS) or cipher list
                                     string (OpenSSL) directly. Use with care. This
                                     option overrides --secure-protocol. The format
                                     and syntax of this string depend on the
                                     specific SSL/TLS engine.

HSTS options:
       --no-hsts                   disable HSTS
       --hsts-file                 path of HSTS database (will override default)

FTP options:
       --ftp-user=USER             set ftp user to USER
       --ftp-password=PASS         set ftp password to PASS
       --no-remove-listing         don't remove '.listing' files
       --no-glob                   turn off FTP file name globbing
       --no-passive-ftp            disable the "passive" transfer mode
       --preserve-permissions      preserve remote file permissions
       --retr-symlinks             when recursing, get linked-to files (not dir)

FTPS options:
       --ftps-implicit             use implicit FTPS (default port is 990)
       --ftps-resume-ssl           resume the SSL/TLS session started in the control
                                     connection when opening a data connection
       --ftps-clear-data-connection  cipher the control channel only; all the data
                                     will be in plaintext
       --ftps-fallback-to-ftp      fall back to FTP if FTPS is not supported in the
                                     target server

WARC options:
       --warc-file=FILENAME        save request/response data to a .warc.gz file
       --warc-header=STRING        insert STRING into the warcinfo record
       --warc-max-size=NUMBER      set maximum size of WARC files to NUMBER
       --warc-cdx                  write CDX index files
       --warc-dedup=FILENAME       do not store records listed in this CDX file
       --no-warc-compression       do not compress WARC files with GZIP
       --no-warc-digests           do not calculate SHA1 digests
       --no-warc-keep-log          do not store the log file in a WARC record
       --warc-tempdir=DIRECTORY    location for temporary files created by the
                                     WARC writer

Recursive download:
  -r,  --recursive                 specify recursive download
  -l,  --level=NUMBER              maximum recursion depth (inf or 0 for infinite)
       --delete-after              delete files locally after downloading them
  -k,  --convert-links             make links in downloaded HTML or CSS point to
                                     local files
       --convert-file-only         convert the file part of the URLs only
                                     (usually known as the basename)
       --backups=N                 before writing file X, rotate up to N backup files
  -K,  --backup-converted          before converting file X, back up as X.orig
  -m,  --mirror                    shortcut for -N -r -l inf --no-remove-listing
  -p,  --page-requisites           get all images, etc. needed to display HTML page
       --strict-comments           turn on strict (SGML) handling of HTML comments

Recursive accept/reject:
  -A,  --accept=LIST               comma-separated list of accepted extensions
  -R,  --reject=LIST               comma-separated list of rejected extensions
       --accept-regex=REGEX        regex matching accepted URLs
       --reject-regex=REGEX        regex matching rejected URLs
       --regex-type=TYPE           regex type (posix|pcre)
  -D,  --domains=LIST              comma-separated list of accepted domains
       --exclude-domains=LIST      comma-separated list of rejected domains
       --follow-ftp                follow FTP links from HTML documents
       --follow-tags=LIST          comma-separated list of followed HTML tags
       --ignore-tags=LIST          comma-separated list of ignored HTML tags
  -H,  --span-hosts                go to foreign hosts when recursive
  -L,  --relative                  follow relative links only
  -I,  --include-directories=LIST  list of allowed directories
       --trust-server-names        use the name specified by the redirection
                                     URL's last component
  -X,  --exclude-directories=LIST  list of excluded directories
  -np, --no-parent                 don't ascend to the parent directory

Email bug reports, questions, discussions to <bug-wget@gnu.org>
and/or open issues at https://savannah.gnu.org/bugs/?func=additem&group=wget.
Conclusion
No human being can digest all the information in the latter. It's a mess. Perhaps another flag can be used for the verbose help information, but for average daily usage, busybox wins here.
EDIT: It looks like I am not the only one who thinks this way, as someone has created https://github.com/tldr-pages/tldr
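Riffing on the "another flag" idea above: a tool could keep -h down to one screenful and hide the exhaustive listing behind a separate flag. Here is a minimal Python sketch of that two-tier help; the program name 'fetch' and the '--help-full' flag are made up for illustration, not taken from any real tool.

import argparse
import sys

# Two-tier help: -h prints a short, digestible summary;
# --help-full prints the exhaustive auto-generated option list.
BRIEF = """usage: fetch [-c] [-q] [-O FILE] URL...
  -c        resume an aborted transfer
  -q        quiet
  -O FILE   save to FILE ('-' for stdout)
Run 'fetch --help-full' for the complete option list."""

parser = argparse.ArgumentParser(prog="fetch", add_help=False)
parser.add_argument("-h", action="store_true", dest="brief_help")
parser.add_argument("--help-full", action="help")  # argparse's full help dump
parser.add_argument("-c", "--continue", dest="resume", action="store_true",
                    help="resume getting a partially-downloaded file")
parser.add_argument("-q", "--quiet", action="store_true", help="quiet (no output)")
parser.add_argument("-O", "--output-document", metavar="FILE",
                    help="write documents to FILE")
parser.add_argument("urls", nargs="*", metavar="URL")

args = parser.parse_args()
if args.brief_help or not args.urls:
    print(BRIEF)  # the busybox-sized help, for daily use
    sys.exit(0)

GNU-style tools already gesture at this split with man pages, but -h itself is still the firehose.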
JavaScript and Whitespace Bloat
Discourse. I hate it so much. It's as if someone who had never used a forum before set out to design forums. They looked at all the reasonable spacing and coherent navigation offered by PHP forums from the 2000s and said: "No, I must have more white space, and a font size that is 3 times as big. I must also make it slower, by adding superfluous JavaScript libraries." Thus Discourse was created: to deter people from actually using forums effectively.
Forums have apparently been nerfed in favor of social media. In fact, most websites have been nerfed in favor of monolithic 'feeds'.
IPv6 Is Badly Designed
IPv6. It was supposed to take over from IPv4 more than 20 years ago (yes, people were talking about IPv6 in the year 2000). In 2020, it still hasn't. Why? Because it's crap, and everyone knows it. However, the amount of corporate money invested in it means that it may eventually get forced onto the masses. It's not even easy to find criticisms of the technology these days; it's all being astroturfed by shills. It's like a bad sequel: worse than the original, but you can't avoid it.
Ref: https://web.archive.org/web/20070707080506/http://tech.hellyeah.com/display_doc.phtml?id=28
To begin with, IPv6 will use a 128 bit address space, as compared to IPv4's 32 bits. This 128 bit address is the biggest reason for IPv6. It is supposed to give us plenty of IP addresses to last us well into the future. However, 128 bits is overkill. Sixty-four should be more than enough. Let's put it in terms that are easier to comprehend. We'll assume that there are 15 billion people on the planet. (currently, there are only 6.1 billion, and the largest estimates for the next fifty years project 12 billion). With 128 bits per IP address, each person could have 2.26e28 IP addresses to themselves. If we reduce it to 64 bits, each person would still have 1.2 billion addresses. What about the people who want to put IP addresses on every light pole, mailbox, stop light and street sign? Well, there are about 148 million square kilometers of land on the Earth (not including ocean). With 128 bits, we have almost 2.3e20 addresses per square centimeter. If we reduce that to 64 bits, each square centimeter would still have about 12.5 IP addresses -- more than enough.

...(Sections Omitted)

Part of IPv4's immense success was due to its simplicity and rigid structure. It does the job it was meant to do quickly and effectively. IPv6 adds all sorts of bells and whistles that are unnecessary and even detrimental. With all the wasted space and privacy issues, I'd rather have the IETF go back to the drawing board and come back with IPv7.
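The quote's arithmetic holds up. A quick sanity check in Python, reusing its own inputs (15 billion people, 148 million square kilometers of land):

# Verify the per-person and per-square-centimeter figures from the quote.
people = 15e9
land_cm2 = 148e6 * 1e10  # 1 km^2 = 10^10 cm^2

for bits in (128, 64):
    total = 2 ** bits
    print(f"{bits}-bit: {total / people:.3g} addresses per person, "
          f"{total / land_cm2:.3g} per cm^2")

# 128-bit: 2.27e+28 addresses per person, 2.3e+20 per cm^2
# 64-bit: 1.23e+09 addresses per person, 12.5 per cm^2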
Less is more. The answer isn't "add more people". The answer isn't "add more IP addresses". Things can't expand exponentially forever. There have to be limits in place, and it has to be practically usable.
If you look at it right, neither IPv4 nor IPv6 is designed correctly. IPv4 is too simple, and IPv6 is too complicated. They are both fails.
It's been said before, but if IPv4 had used 5 octets from the start, IP addresses would not have run out for decades. Maybe not ever.
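To put numbers on that: 5 octets is 40 bits, about 1.1 trillion addresses. A rough check in Python (the population figures are assumptions for illustration):

# How far would a 5-octet (40-bit) address space stretch?
total_40bit = 2 ** 40  # 1,099,511,627,776 addresses

for population_billions in (4, 8, 15):
    per_person = total_40bit / (population_billions * 1e9)
    print(f"{population_billions} billion people -> {per_person:,.0f} addresses each")

# 4 billion people -> 275 addresses each
# 8 billion people -> 137 addresses each
# 15 billion people -> 73 addresses each

Even at 15 billion people, that's dozens of addresses per person, without NAT.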
Hidden Files
alias ls='ls -a'
Hidden files are evil. All ls should be ls -a. They will bite you as often as PCB makers get bitten by incorrect footprints. It's bad design, and it will never go away. Don't tell yourself you should've been smarter; that's a lie. It's the human factor, and it will always, always bite you someday. Hidden files are fundamentally flawed. They will always cause trouble.
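A small Python demonstration of the bite: glob-style wildcard matching skips dot-files by default, so a "copy everything" or "clean everything" script silently misses them.

import glob
import os
import tempfile

# Create one visible file and one hidden file in a scratch directory.
d = tempfile.mkdtemp()
open(os.path.join(d, "notes.txt"), "w").close()
open(os.path.join(d, ".secrets"), "w").close()

# The wildcard quietly skips the dot-file -- the trap described above.
print(glob.glob(os.path.join(d, "*")))  # only .../notes.txt
# os.listdir, like ls -a, reports everything.
print(os.listdir(d))                    # ['notes.txt', '.secrets'] (order varies)

The same trap exists in the shell: cp * backup/ leaves .secrets behind.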
Duplication of Work
There are dozens of companies and individuals building source code management tools. Meta-coding. Duplication of work has never been this bad, and the duplication here isn't even of an actual tool: it's of a tool for a tool. Go ahead and search for source code / git hosting. Prepare yourself: there are way too many options. It has reached the point of hyperbole.
Western cultures need to take a lesson from China. One of China's strengths is not necessarily cutting corners, but not doing more than is needed to complete a task. It's about efficiency. It is possibly a sin to indulge in things that aren't necessary. Time and life are short. Time management is critical.
Intel ME
"Whomsoever diggeth a pit, shall fall in it." See libreboot.org.
God curse the surveillance state.
Inconsistent flag usage for common commands: chown / scp and the recursive flag
Look at scp and chown. Then look at the recursive flag.
scp -r
chown -R
They are both recursive. I know I've stumbled across this probably over a hundred times in the last decade using Linux.
There should be an unwritten standard where lower case is used for the most common operations, kind of like the unwritten standard that commands give help when you type command -h or --help. But there is none. Cases are all over the place. It's a mess. Simple things. How many millions of moments have been lost to people typing the wrong -r or -R? It adds up over billions of man hours. It's a fail.
Someone really screwed up here.
A solution would be for flags to be case insensitive. Is there any reasonable use case where you'd want -r and -R to mean different things in the same program? It's just confusing.
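As a sketch of that idea (an illustration only; not how scp or chown actually parse their arguments), a tool could fold flag case to one canonical spelling before parsing:

import sys

# Fold both spellings of the recursive flag to one canonical form,
# so -r and -R stop being a trap.
CANONICAL = {"-R": "-r"}

def normalize(argv):
    return [CANONICAL.get(arg, arg) for arg in argv]

args = normalize(sys.argv[1:])
recursive = "-r" in args
paths = [a for a in args if not a.startswith("-")]
print(f"recursive={recursive}, paths={paths}")

Whether run with -r or -R, the program sees the same flag.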
ARM
"That sure is a lot of non-reusable trash you are creating there (cellphones)."
OAuth
"We need to make it more difficult for users to get their data" - gmail and office365 email monopoly collusion