On a current project we are using TeamCity and Octopus Deploy for building and deploying our IIS app.
We have 4 environments: a CI environment (automated build on check-in that runs unit tests and automated QA tests), plus QA, UAT, and Prod environments (which we push to manually using Octopus).
On local (dev) builds, the default build script pushes directly to a local Octopus instance for testing purposes.
Would it be better practice for the CI build (which runs quite frequently) to follow a similar model to the local build (pushing directly to a Tentacle instance instead of going through the master), or to go through the Octopus server (which requires creating a new release every single time a build is made)?
Your questions seem related, but I don't think they are, so I'll break this answer up into two pieces.
PART ONE
I'd say, depending on the size of your packages and the amount of time each build takes (including automated unit/QA tests), you have two options: (1) deploy continuously on every check-in, or (2) deploy on a schedule, e.g. a nightly build.
At a minimum, I'd argue for automatically deploying to your CI and QA environments. It only makes sense that the testers test the latest deployment.
The reason I mention package size is that if you're doing 15-20 builds a day and generating 50MB+ packages with all the unit tests, you may end up queuing builds if you're not careful. Timeliness is a factor: if each check-in takes 20 minutes to build, NuGet pack, push, and deploy, that might be alright, but at 20 minutes x N check-ins a day you might be queuing builds by the middle of the day. Not to mention that storage may (slowly) become an issue. I don't know all the details of your project, but keep these factors in mind.
No matter what you want to do, you'll have to use Octo.exe. If you want to do #1 (continuous deployment), you could write a simple PowerShell script that runs the Octo.exe command AFTER the build steps have finished and the package has been pushed to the NuGet server of your choice. For the command, you'll just tell Octopus to grab the latest package.
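As a rough sketch (the Octo.exe path, project name, server URL, and environment name below are placeholders, not details from your setup), that post-build step could look something like this:

    # Post-build deployment step (a sketch; paths and names are placeholders).
    # Assumes an earlier build step has already pushed the package to your NuGet feed.
    $octo   = "C:\Tools\Octo\Octo.exe"          # wherever Octo.exe lives on the build agent
    $server = "http://your-octopus-server"      # your Octopus Server URL
    $apiKey = $env:OCTOPUS_API_KEY              # API key passed in by TeamCity

    # Create a release from the latest packages and deploy it straight to CI.
    & $octo create-release `
        --project "MyWebApp" `
        --server $server `
        --apiKey $apiKey `
        --deployto "CI" `
        --waitfordeployment

    if ($LASTEXITCODE -ne 0) { throw "Octopus release/deployment failed." }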
If you want to do the nightly build option, you'd do the same thing (script-wise), except that instead of having TeamCity run the script, you'd schedule a task on your Windows server to run it at a certain time. This would be the most prudent option, but it's not difficult to switch from it to continuous deployment later.
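For the scheduled option, something like the following would do it (a sketch; the script path, task name, and time are placeholders, and Register-ScheduledTask needs Windows 8 / Server 2012 or later - on older boxes schtasks.exe does the same job):

    # Run the same deployment script every night at 2:00 AM (placeholder path/time).
    $action  = New-ScheduledTaskAction -Execute "powershell.exe" `
                   -Argument "-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Deploy-Nightly.ps1"
    $trigger = New-ScheduledTaskTrigger -Daily -At "2:00AM"

    Register-ScheduledTask -TaskName "Nightly Octopus Deploy" `
        -Action $action -Trigger $trigger -User "SYSTEM"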
PART TWO
As far as whether the Tentacles fetch packages from the Octopus server or directly from the NuGet server, it shouldn't matter much. I would keep a few things in mind that might guide you in picking one over the other.
Take your package size into consideration - the larger the package, the more I'd lean toward going direct to the NuGet server, if only out of concern for I/O load on the Octopus server. Odds are, this won't be a big deal.
Number of servers per environment - this matters especially if you have multiple servers in each environment. By default Octopus does parallel deployments, but you can switch to "rolling" deployments (setting a specific number of servers to be deployed to at a time). If you're doing continuous deployments for EVERY check-in, each Tentacle will have to download the latest package, so depending on package size and the number of Tentacles to push to, you may run into some bandwidth issues. I don't know how many servers you have in your environments, so you know best what fits.
Are there any other teams using the Octopus Server? I ask only because if you're the only team, then you really don't have to worry about how each Tentacle gets the package. Direct from the NuGet server vs. the Octopus server really won't matter.
Here's the URL to the Octo.exe documentation: http://docs.octopusdeploy.com/pages/viewpage.action?pageId=360596 - it's your best friend for fully automating your builds to whatever environments you want.
No matter what you choose, I'd highly recommend automating your deployments. Period. After you've automated them, you'll wonder why you ever bothered with manual deployments. Keep in mind, Octo.exe does not need to run on the Octopus Server itself!
Octo.exe talks to the Octopus API for everything, so you can test an Octo.exe command from any machine that can reach your Octopus Server. It's best to try it out on your desktop, and once you get it just right, put it into a script and test that.
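A simple way to prove it out from your desktop before scripting anything (the server URL and API key are placeholders) is to ask the API for something harmless, like the list of environments:

    # If this prints your environments, the same server/API key will work
    # for create-release and deploy-release from your build scripts.
    & "C:\Tools\Octo\Octo.exe" list-environments `
        --server "http://your-octopus-server" `
        --apiKey "API-XXXXXXXXXXXXXXXX"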
So to clarify the OP's point about "when" a release needs to be created: it's highly subjective, and since a project within Octopus can deploy one or more NuGet packages, it's going to vary on a case-by-case basis. That said, I think there needs to be some acknowledgement of versioning your releases, which inevitably brings the versioning of your binaries and NuGet packages into the arena as well.
Example: if your testers rely heavily on TFS changeset numbers, then it's best to have that changeset number embedded in your binaries (via AssemblyVersionInfo), have your NuGet packages reflect that version (in your NuSpec), and then have Octopus use NuGet package versioning for your releases. Sweet: your release versions can show your changeset number! Awesome. Well, except when your project deploys more than one NuGet package. Which package then acts as the version for the entire deployment? Things get pretty sticky when you have more than one NuGet package per project and deployment process. That's why the other versioning mechanism in Octopus (the variable template) typically works best for everyone.
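To make the single-package case concrete, here's a rough sketch of carrying the changeset number through (the project name, version scheme, and TeamCity variable are assumptions about your setup, not details from the question):

    # Stamp the TFS changeset into the package version and reuse it for the release.
    $changeset = $env:BUILD_VCS_NUMBER          # TeamCity's changeset number (adjust for your setup)
    $version   = "1.0.$changeset"               # e.g. 1.0.4321

    # Pack with an explicit version so the package and release line up
    # (the same $version would also go into your AssemblyVersionInfo during the build).
    & nuget.exe pack "MyWebApp.nuspec" -Version $version

    # Create the Octopus release using that same number.
    & Octo.exe create-release `
        --project "MyWebApp" `
        --version $version `
        --packageversion $version `
        --server "http://your-octopus-server" `
        --apiKey $env:OCTOPUS_API_KEY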
Keep in mind that the concept of promotion within Octopus is also an important feature, especially if you're after best practices. It's best to promote a release between environments, not only for consistency in the deployment process, NuGet package versions, and variable values, but so the version number (whether NuGet or variable template) stays consistent across your environments. It's visually much easier to see release 1.0.2 across all environments and know that 1.0.2 has been thoroughly tested, rather than creating a new release for EACH deployment, which would look something like 1.0.1, 1.0.2, 1.0.3, 1.0.4, etc., especially when the code is all the same. Promotions let you (effectively) redeploy a release to another environment with all the same NuGet package versions, variable settings, and deployment process, all within the same release version. As a shameless self-plug, here's my blog post on this issue of versioning.
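Promoting is just a matter of deploying an existing release number to the next environment, e.g. (project, version, and environment names below are placeholders):

    # Redeploy release 1.0.2 - same packages, process, and variable snapshot - to UAT.
    & Octo.exe deploy-release `
        --project "MyWebApp" `
        --version "1.0.2" `
        --deployto "UAT" `
        --server "http://your-octopus-server" `
        --apiKey $env:OCTOPUS_API_KEY `
        --waitfordeployment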
Are there best practices on "when" to create a release? I'd say for any code change, a new release is necessary. You don't need to create a new release to move the same code to a different environment. But if you're concerned with releases, you should be concerned with versioning too.
To summarize (TL;DR): automate your deployments with Octo.exe, whether on every check-in or on a nightly schedule; don't worry too much about whether Tentacles pull packages from the Octopus server or straight from the NuGet server unless package size or server count makes it an issue; create a new release for every code change, version it deliberately, and promote that same release between environments rather than creating a fresh release per deployment.
This is all I can think of off the top of my head at the moment. Hope this clarifies things.