Async await

Async/await has come up a lot recently on Twitter, in coding videos, and on blogs. But what is it? It seems related to callbacks and promises. To declare an async function, we place the keyword async before the function definition. “The word async before a function means the function always returns a promise. And if the return value is not a promise, it is wrapped in a resolved promise automatically.”

function cool(){
  return "cool"
}
console.log(cool());
//cool

async function cool2(){
  return "cool2"
}
console.log(cool2());
//Promise {<resolved>: "cool2"}

Hmm, why did we get a Promise instead of the string? Since the async function returns a promise, logging the call shows the promise wrapper rather than the value, so let's unwrap it with then.

cool2().then(a => { console.log(a) })
// now we get the output "cool2"
// (the expression itself evaluates to Promise {<resolved>: undefined},
// because our callback doesn't return anything)

If we call then() with no callback, we can see the resolved promise wrapper that is returned.

cool2().then() //Promise {<resolved>: "cool2"}

So now we see that everything gets wrapped in a promise, even if the return value isn't a promise. To unwrap it, we need await. “The await operator is used to wait for a Promise. It can only be used inside an async function.” [MDN] The syntax is returnValue = await expression.

async function cool3(){
  let rv = await "cool3"
  return rv
}
console.log(cool3()); // Promise {<pending>}, the promise hasn't settled yet
cool3().then(v => console.log(v)); // "cool3", the resolved value
// in the browser console the settled promise shows as Promise {<resolved>: "cool3"}
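Awaiting a plain string resolves immediately, so to actually see await pause execution we can wait on a real asynchronous promise. A small sketch (the delay helper and cool4 are my own, not from the examples above):

```javascript
// Hypothetical helper: a promise that resolves after ms milliseconds
function delay(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

async function cool4() {
  await delay(100); // execution pauses here until the promise resolves
  return "cool4";
}

cool4().then(v => console.log(v)); // "cool4" (after ~100ms)
```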

What good is this? What would we use it for? Well, it appears async/await is syntactic sugar for promises, so it's supposed to make promise code look cleaner. The code below is what Babel generated from cool3(). Wow, I'll take async/await.

//GENERATED FROM BABEL:
var cool3 = function () {
  var _ref = _asyncToGenerator( /*#__PURE__*/regeneratorRuntime.mark(function _callee() {
    var rv;
    return regeneratorRuntime.wrap(function _callee$(_context) {
      while (1) {
        switch (_context.prev = _context.next) {
          case 0:
            _context.next = 2;
            return "cool3";

          case 2:
            rv = _context.sent;
            return _context.abrupt("return", rv);

          case 4:
          case "end":
            return _context.stop();
        }
      }
    }, _callee, this);
  }));

  return function cool3() {
    return _ref.apply(this, arguments);
  };
}();

function _asyncToGenerator(fn) {
  return function () {
    var gen = fn.apply(this, arguments);
    return new Promise(function (resolve, reject) {
      function step(key, arg) {
        try {
          var info = gen[key](arg);
          var value = info.value;
        } catch (error) {
          reject(error);
          return;
        }
        if (info.done) {
          resolve(value);
        } else {
          return Promise.resolve(value).then(function (value) {
            step("next", value);
          }, function (err) {
            step("throw", err);
          });
        }
      }
      return step("next");
    });
  };
}
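Ignoring the generator machinery, the observable behavior of cool3 can be sketched with plain promise chaining (this is my hand-written equivalent, not Babel's output):

```javascript
// Hand-written promise equivalent of cool3: await value
// becomes Promise.resolve(value).then(...)
function cool3Promise() {
  return Promise.resolve("cool3").then(rv => rv);
}

cool3Promise().then(v => console.log(v)); // "cool3"
```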

What is a generator?

What is a generator in JavaScript? Generators and iterators seem to be linked. According to MDN, “The Generator object is returned by a generator function and it conforms to both the iterable protocol and the iterator protocol.” Still confused? You're not the only one. OK, let's dig deeper into this. Looking around, I see that Arfat wrote on his blog, “A generator is a function that can stop midway and then continue from where it stopped. In short, a generator appears to be a function but it behaves like an iterator.” That's starting to make more sense.

Looking at the generator from the perspective of an iterator that can pause and continue when needed, this could have many applications. A simple way of looking at it: say you allow yourself 5 Oreos a day. That's cookies, not packages! If we had a generator “giveOreo” for Oreos, we could call it and it would give back the cookie number we are on. Once we had our 5, calling giveOreo again would report that it is “done”. “A generator is a function that can stop midway and then continue from where it stopped.” So it appears we've described a pseudo-generator using Oreos and our diet.

How can we code the above pseudocode? The syntax for a generator function is “function* name”, not “function name”. According to MDN, the placement of the asterisk is just convention: “The function* declaration (function keyword followed by an asterisk) defines a generator function, which returns a Generator object.” Let's start by naming the generator:

function* giveOreo(){   
}

Now that we have a name, let's add the body. The thing is, when we call a generator function, it's not going to start executing immediately. Rather, we get back an iterator object. But how do we use that object? “When the iterator's next() method is called, the generator function's body is executed until the first yield expression, which specifies the value to be returned from the iterator.” OK, let's add this:

function* giveOreo(){
  let index=1; //start with 1, you can't get 0 cookies if you take a cookie
  while(index <= 5){
    yield `You eat cookie ${index++}`
  }
}

OK, how would we run it now that we have a function?

let todaysOreos = giveOreo()
console.log(todaysOreos.next().value) //You eat cookie 1
console.log(todaysOreos.next().value) //You eat cookie 2
console.log(todaysOreos.next().value) //You eat cookie 3
console.log(todaysOreos.next().value) //You eat cookie 4
console.log(todaysOreos.next().value) //You eat cookie 5

Great. We have our 5 cookies. What about when we call it again?

console.log(todaysOreos.next().value) // undefined

So no more cookies. But can we make the generator tell us that there are no more cookies? “When a generator is finished, subsequent next() calls will not execute any of that generator's code, they will just return an object of this form: {value: undefined, done: true}.” To give the generator a final value as it finishes, we can return something. Let's do that.

If we change the code, we can get back one last message, and then the generator is done:

function* giveOreo(){
  let index=1; //start with 1, you can't get 0 cookies if you take a cookie
  while(index <= 5){
    yield `You eat cookie ${index++}`
  }
  return "No more cookies for you"
}

let todaysOreos = giveOreo()
console.log(todaysOreos.next().value) //You eat cookie 1
console.log(todaysOreos.next().value) //You eat cookie 2
console.log(todaysOreos.next().value) //You eat cookie 3
console.log(todaysOreos.next().value) //You eat cookie 4
console.log(todaysOreos.next().value) //You eat cookie 5
console.log(todaysOreos.next().value) // "No more cookies for you"
console.log(todaysOreos.next().value) // undefined
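Logging the whole object returned by next(), instead of just .value, makes the done flag visible. A minimal sketch with a hypothetical two-step generator:

```javascript
// Hypothetical minimal generator to show the {value, done} shape
function* twoBites() {
  yield "a";
  return "b";
}

const it = twoBites();
console.log(it.next()); // { value: 'a', done: false }
console.log(it.next()); // { value: 'b', done: true }
console.log(it.next()); // { value: undefined, done: true }
```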

Now, if we REALLY wanted to keep getting messages instead of undefined, we could make a permanent message loop after the cookie loop is done. That could be a little more useful than undefined:

function* giveOreo(){
  let index=1; //start with 1, you can't get 0 cookies if you take a cookie
  while(index <= 5){
    yield `You eat cookie ${index++}`
  }
  while(true){yield "No more cookies for you"}
}

let todaysOreos = giveOreo()
console.log(todaysOreos.next().value) //You eat cookie 1
console.log(todaysOreos.next().value) //You eat cookie 2
console.log(todaysOreos.next().value) //You eat cookie 3
console.log(todaysOreos.next().value) //You eat cookie 4
console.log(todaysOreos.next().value) //You eat cookie 5
console.log(todaysOreos.next().value) //"No more cookies for you"
console.log(todaysOreos.next().value) // No more cookies for you
console.log(todaysOreos.next().value) // No more cookies for you
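Since the Generator object conforms to the iterable protocol, the cookie loop can also be consumed with for...of, which calls next() for us, stops once done is true, and ignores any return value. A sketch (renamed here so it doesn't clash with giveOreo above):

```javascript
function* giveOreoLoop() {
  let index = 1;
  while (index <= 5) {
    yield `You eat cookie ${index++}`;
  }
  return "No more cookies for you"; // for...of never sees this
}

// for...of consumes the iterator and stops when done is true
for (const msg of giveOreoLoop()) {
  console.log(msg); // "You eat cookie 1" ... "You eat cookie 5"
}
```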

The rest parameter (…args) vs the arguments object

So I was listening to the Syntax.fm podcast, which covers various web development topics. In one episode they were talking about the rest parameter, and one of the hosts mentioned using it often, so I decided to check it out. According to Mozilla, “the rest parameter syntax allows us to represent an indefinite number of arguments as an array.”[A] Just from this statement, I can see how useful it could be. One scenario that crossed my mind: if an unknown number of arguments is being passed in, we can rely on the rest parameter to accept them all, then branch inside the function based on how many arguments arrived.

Why not use the arguments array?

A good question would be: why care about the rest parameter if JavaScript already has arguments? Well, the thing is, the arguments OBJECT is an ARRAY-LIKE STRUCTURE, meaning it's not an array. It's like an array in that we can access entries inside the “arguments object” by index, such as arguments[3]. Unlike an array, we cannot use array methods such as map, push, pop, etc. As per Mozilla, it “does not have any Array properties except length.”[B] What the rest parameter does is place the unnamed arguments inside a real array, which we can manipulate like any other array.
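A quick sketch of the difference, with illustrative function names of my own:

```javascript
// Rest parameter: nums is a real Array, so Array methods work directly
function sumRest(...nums) {
  return nums.reduce((total, n) => total + n, 0);
}

// arguments: array-like (indexed entries and length) but no Array
// methods, so we convert it with Array.from first
function sumArguments() {
  return Array.from(arguments).reduce((total, n) => total + n, 0);
}

console.log(sumRest(1, 2, 3));      // 6
console.log(sumArguments(1, 2, 3)); // 6
```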

Why use the arguments object at all?

The arguments object does still have its uses. Only unnamed parameters are collected by the rest parameter, so to access named parameters you still need to use their binding names, or reference their index in the arguments object. Another feature the arguments object has is callee, which references the currently executing function. That could be its own topic, so I'll let the reader look it up further for now.
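A sketch of named parameters working alongside the rest parameter (the greet function is illustrative):

```javascript
// greeting is a named parameter; only the leftover arguments land in names
function greet(greeting, ...names) {
  return names.map(name => `${greeting}, ${name}!`);
}

console.log(greet("Hi", "Ada", "Grace")); // [ 'Hi, Ada!', 'Hi, Grace!' ]
```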

In conclusion, both features have their own reasons to be used, and I like to think of them as complementary to each other.

Linters

Having attended several meetups and conferences, I had heard the term “linter” used. I wasn't sure what it was and was surprised that I had never used one. But actually, I had, without knowing it was a linter. “A linter or lint refers to tools that analyze source code to flag programming errors, bugs, stylistic errors, and suspicious constructs.” (A) In CS50, an introduction to computer science, we used a program called “style50” that would look for stylistic errors. Wow, I was already using a linter and didn't even know it.

How it works: when you pass code to a linter, it scans the code and returns messages. These could be syntax errors, structural problems like the use of undefined variables, or violations of best practices or code style guidelines. What's great about it is that we get immediate feedback on bugs that might only rear their head in a specific instance, and that isn't something we would want to release into production!
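For example, a typo that the engine only catches at runtime is exactly the kind of thing a linter (say, ESLint's no-undef rule) reports statically. An illustrative snippet:

```javascript
// "raduis" is a typo for "radius": the engine only notices when the
// function actually runs, while a linter flags the undefined variable
// before the code ships.
function area(radius) {
  return Math.PI * radius * raduis; // ReferenceError at call time
}

try {
  area(2);
} catch (e) {
  console.log(e.name); // "ReferenceError"
}
```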

“The term “lint” was derived from the name of the undesirable bits of fiber and fluff found in sheep’s wool.”(B) Stephen Johnson created the first lint program out of need to check his own code. In my mind, I kind of compare it to the tests that we create today. The linter is a large generic test framework rather than the small tests that we write for specific apps today, but the concept is the same. Here is a quote from Stephen:  

I originally wrote lint to help me debug the YACC grammar I was writing for C. I had the grammar written, but no way to test it! I could run it on the body of C code that existed, but I had no way of telling whether I had correctly “understood” the programs. So I hit on the idea of checking the consistency between function calls and definitions – Stephen Johnson (C)

Thanks to Stephen's need, linters are now used all over the software engineering space. If you use JavaScript, there are several popular linters worth looking at (ESLint and JSHint, for example).

CI/CD

What is CI/CD?

CI/CD stands for Continuous integration and continuous delivery. It is a set of principles that development teams use to build, test, and release code. A lot of companies will have this as part of their software lifecycle, so it is definitely something new developers should at least be familiar with in theory, if not practice.

Continuous integration

“The technical goal of CI is to establish a consistent and automated way to build, package, and test applications” [A]. With consistency in the integration process in place, teams are more likely to commit code changes more frequently.

“Continuous integration is the practice of routinely integrating code changes into the main branch of a repository, and testing the changes, as early and often as possible”[B]. Waiting to upload changes can cause conflicts between builds, and finding a bug after several days of coding can lead to lost work. It is financially sounder to merge and test changes early than to let branches linger and diverge. Say two developers need a function for a certain task, and Dev1 wrote and tested it several days ago. If Dev2 also needs that function, Dev2 could end up writing similar code that gets discarded at merge time. In the alternate scenario, Dev2 merges their version first, but Dev1 has other functions built on their own version, and now Dev1 has to go back and change those subfunctions.
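As a sketch, a minimal CI pipeline in GitHub Actions syntax (assuming a hypothetical npm-based project) that builds and tests every push to the main branch:

```yaml
# Hypothetical workflow file (.github/workflows/ci.yml)
name: ci
on:
  push:
    branches: [main]
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci    # reproducible dependency install
      - run: npm test  # the automated test step CI relies on
```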

Continuous delivery

Continuous delivery automates the delivery of applications to selected infrastructure environments. Most teams work with multiple environments other than production, such as development and testing environments, and CD ensures there is an automated way to push code changes to them. CD automation then performs any necessary service calls to web servers, databases, and other services that may need to be restarted or follow other procedures when applications are deployed.

Continuous delivery is the practice “where code changes are automatically built, tested, and prepared for a release to production”[C]. It follows CI by “deploying all code changes to a testing environment and/or a production environment after the build stage”. By testing changes before deployment, downtime in a production environment can be minimized. And by delivering changes continuously, if any error does slip into production, only a small set of changes needs to be rolled back, since errors should surface earlier.

Continuous Delivery vs. Continuous Deployment

Why continuous delivery and not continuous deployment? “The difference between continuous delivery and continuous deployment is the presence of a manual approval to update to production. With continuous deployment, production happens automatically without explicit approval.”[E] Continuous deployment might make sense in an environment, say Netflix or YouTube, where a small piece of downtime, while troublesome, will not cause problems with the customer base. Continuous delivery with a manual check could make more sense in scenarios where customers are more fickle, say Amazon, where a person who wants a garlic press right now might, if unable to purchase it, come to their senses and decide they would never actually use it. With continuous delivery, if there were a scenario the tests did not account for, a manual approver might still catch it and think of a new test for that error.

Serverless: the new trend

Recently the hot new buzzword seems to be “serverless”. There is a meetup group called “Serverless NYC”, and there have been a few “Serverless” conferences. Lambda is the new tool from Amazon that gets thrown around by developers at every meetup. But what are we talking about exactly? Just to be clear: even though the name is serverless, when you eventually get to the bottom of the Tootsie Pop, there is a server there. It's just a question of who controls it.

The people who are using the services are still running their work on someone's server. “With a serverless architecture, you focus purely on the individual functions in your application code. Services take care of all the physical hardware, virtual machine operating system, and web server software management.”[1] This allows the developer to focus on being a developer, without necessarily needing the extra skill set of Linux, networking, and infrastructure. Since the focus here is on functions, it should probably be a smaller project, or something that can stand independent of the scope of a larger project.
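As a sketch, here is the shape of a function you might hand to such a service, modeled on the Node.js handler style AWS Lambda uses; the event field (event.name) is made up for the example, and in a real deployment the function would be assigned to exports.handler:

```javascript
// Illustrative serverless function: the provider invokes it with an
// event and runs it on infrastructure the developer never manages.
const handler = async (event) => {
  const name = (event && event.name) || "world";
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
};

// Simulating one invocation locally:
handler({ name: "serverless" }).then(res => console.log(res.body));
// {"message":"Hello, serverless!"}
```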

The benefits of a serverless architecture are that there is no need to deploy or administer servers, install software, or run updates; that is handled by your provider. If you need more compute, you can simply purchase more consumption rather than build and deploy new servers, even virtual ones. You don't need to architect for availability and fault tolerance, since that too is built into the architecture you are buying. Availability refers to your service being up, while fault tolerance refers to withstanding the failure of a server.

With every upside there is a downside. Some of the downsides of serverless computing are: 1) multitenancy problems: if the provider over-provisions their resources and available compute is limited, it will affect your services. 2) Vendor lock-in: if you design your services around the capabilities and APIs of provider A, it becomes much harder to move to provider B. 3) Security concerns: you don't really know the status of the security of your host. You trust your provider, but those who need absolute control, or who answer to certain regulations, must make sure the solution is viable for their needs.

While there are some downsides, there are many potential upsides. Serverless is here to stay and should be weighed as a viable solution for any deployment.

Passenger: the web application server for rails

When I first looked up Passenger, a Gundam-like robot icon appeared. From the site itself: “Battle-hardened by some of the most demanding web apps for over a decade, Passenger is considered one of the most performant, secure and mature app servers currently available.” (1)

OK, so what is it? Passenger is an open source web application server that handles HTTP requests, manages processes and resources, and enables administration, monitoring, and problem diagnosis. But then the question is: if I can just type “rails s”, why would I need Passenger? Passenger doesn't replace “rails server”; what “rails server” does is boot an application server, such as Puma or Passenger, to serve the Ruby application.

The default web application server activated when you use “rails server” is Puma; looking at the Gemfile, you can see “gem ‘puma’, ‘~> 3.7’”. I think this is a great thing to realize: new Rails developers need to know not only how to write code in Ruby and Rails, but also about the ecosystem that runs a web application, instead of using Heroku as a crutch. Standing up a server on a standalone box can be a great learning experience.

On Linux you would use Nginx or Apache to host pages, as they are web servers: they provide the HTTP transaction handling and serve static files. However, they are not Ruby application servers and don't run Ruby applications by themselves. Nginx and Apache are used together with an application server, such as Puma or Passenger. What application servers do is make it possible for Ruby apps to speak HTTP; Ruby apps (and frameworks like Rails) can't do that by themselves. But application servers aren't as good as Nginx and Apache at handling raw HTTP traffic, since Nginx and Apache are better at I/O security, HTTP concurrency management, connection timeouts, etc. This is why a good Rails engineer still needs to know about Linux, Apache, and Nginx.

Why choose Passenger over Puma? Passenger's strength is that it can also host Python, Node.js, and Meteor apps. It is the best choice when you have apps developed in several languages that Passenger supports and you want to consolidate your app server stack. Puma's strength is that it's easy to configure and mostly “just works” out of the box. I think that after getting familiar with what Puma does, it's worth looking at Passenger for growth cases.