Thursday, July 21, 2011

App Engine 1.5.2 SDK Released

As the summer heat descends on the Northern Hemisphere, we thought we’d release our newest App Engine version with some changes that are sure to keep you playing around in the cool, air-conditioned indoors (hey, you don’t want your computer to overheat, right?).

Production Changes

  • Adjustable Scheduler Parameters - As we previously discussed, we are introducing two scheduler knobs (okay, they actually look like sliders) that will allow you to control some of the parameters that influence how many Instances run your application. Today you will be able to set the minimum pending latency and maximum number of idle instances for your application.

Datastore Changes

  • Advanced Query Planning - We are removing the need for exploding indexes and reducing the custom index requirements for many queries. The SDK will suggest better indexes in several cases, and an upcoming article will describe what further optimizations are possible (there's a sketch of the kind of query this affects just after this list).
  • Namespaced Datastore Stats - Now, in addition to getting overall datastore stats, we are providing a new option to query datastore stats per namespace.
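To make the query planning change a bit more concrete, here's a sketch of the kind of multi-filter query that tends to require a custom composite index today; the model and property names below are made up purely for illustration:

    from google.appengine.ext import db

    class Product(db.Model):
        # Hypothetical model with several filterable properties.
        category = db.StringProperty()
        in_stock = db.BooleanProperty()
        tags = db.StringListProperty()
        price = db.FloatProperty()

    # Several equality filters plus a sort order: the sort of query whose
    # custom index requirements the improved query planner reduces.
    results = (Product.all()
               .filter('category =', 'books')
               .filter('in_stock =', True)
               .filter('tags =', 'sale')
               .order('price')
               .fetch(20))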

Task Queue Changes

  • New Task Queue details page - We’ve revamped the Task Queue details page in the Administration Console to provide more information about the tasks being run. You can now see the headers included in the enqueued task, the payload, and information from previous task runs.
  • 1MB Pull Task Size - It’s our belief that there is only one way for size limits to go - and that’s up! So with this release we’ve increased the size for pull tasks to 1MB.
  • Pull queue lease modification - We’ve introduced a new method for Pull Queues that lets you extend the lease on a task you’ve already leased if the initial lease turns out to be too short (see the sketch just after this list).
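Here's a rough Python sketch of the pull queue flow with the new lease extension; the queue name and timings are made up, and the queue itself would need to be defined as a pull queue in queue.yaml:

    from google.appengine.api import taskqueue

    # Hypothetical pull queue (defined with mode: pull in queue.yaml).
    q = taskqueue.Queue('my-pull-queue')

    # Lease up to 10 tasks for 60 seconds; pull task payloads can now be
    # up to 1MB.
    tasks = q.lease_tasks(lease_seconds=60, max_tasks=10)

    for task in tasks:
        # ... do the actual work on task.payload here ...
        # If 60 seconds turns out to be too short, extend the lease rather
        # than letting the task become visible to other workers again.
        q.modify_task_lease(task, lease_seconds=120)

    # Delete tasks once the work is done.
    q.delete_tasks(tasks)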

Lastly, we have some exciting news related to the experimental Go runtime. While it remains experimental, starting with 1.5.2 all High Replication Datastore (HRD) apps will have access to the Go runtime in production.

As always, there are also some small features and bug fixes, the full list of which can be found in our release notes (Python, Java). We look forward to your feedback and questions in our forum.

4 comments:

draw.io said...

The change adding Min Pending Latency is pretty huge, actually. This is a 1.6 release in feature terms.

The 3 reserved instance idea was, obviously, an absolute measure, which was less than ideal in terms of scalability. This is beautifully simple, yet enough to describe exactly when you want to scale up (most importantly, in user experience terms).

The scaling down, Max Idle Instances, just doesn't seem so cute to me, however. It strikes me as going back to the absolute measuring of the reserved instances. And the best setting probably varies by time of day.

The number I pick for Max Idle Instances depends on how much traffic I get, and the boundary case of that is: I have no clue. It's kinda like being able to set the number of reserved instances; above that number, like above 3 instances previously, the scalability degrades.

Isn't the scale down just the opposite of the scale up, with some hysteresis? i.e. "Min Pending Latency High" and "Min Pending Latency Low". Add a bit of smoothing when calculating what the current low is (and smooth the high calculation if you're not already).

That translates to:

Scale Up if the user isn't getting a response fast enough.
Scale Down if there's excess capacity and scaling down won't cause anyone to notice.
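Roughly, as a sketch (just illustrating the rule I mean, not claiming this is how the real scheduler works):

    # Purely illustrative; the thresholds and the idle_instances input
    # are made up, not actual scheduler parameters.
    def scaling_decision(smoothed_pending_latency, idle_instances,
                         latency_high=0.5, latency_low=0.1):
        if smoothed_pending_latency > latency_high:
            return 'scale up'    # users are waiting too long for an instance
        if smoothed_pending_latency < latency_low and idle_instances > 0:
            return 'scale down'  # excess capacity, and nobody should notice
        return 'hold'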

Atif said...

Google is still using the CPU-hours pricing model. How will it deal with the case where a user reserves a maximum of 100 instances for their application under the current pricing model?

Belvettina said...

good !
http://www.gocasa.ro

Anonymous said...

Guys, very much anticipating that "upcoming article" to explain just what you mean by "removing the need for exploding indexes".

I'm currently developing an app and having to jump through lots of hoops to provide a 5-way search filter.

I tried searching without including list properties in indexes and that didn't work, so I've added indexes with multiple list properties for now, but I know this won't work in production because it will explode past 5000 index entries real quick.