EDIT: As the question has changed in emphasis, I've edited my answer as follows.
Architecture and architect are heavily overloaded terms. To start, you need to specify whether you're talking about a software company (where software is the product/service) or a line-of-business company (where software supports the product/service).
There is also the top-down view of architecture (what matters from the organisation viewpoint) versus the bottom-up view (what matters from the project requirements viewpoint).
In a large line-of-business company, architecture from the top-down (organisation) viewpoint is normally partitioned something like this:
- Domain architecture, sometimes called business architecture. For example, understanding commodities trading processes and the associated IT systems.
- Data architecture. For example, understanding descriptions of data in storage and data in motion; descriptions of data stores, data groups and data items; and mappings of those data artifacts to data qualities, applications, and locations.
- Technical architecture. For example, understanding the structure and behaviour of the technology infrastructure of an enterprise, solution or system.
My architecture areas from the bottom-up (requirements) viewpoint look something like this:
- Correct use of middleware - loose coupling, fault tolerance, target-specific transforms, eliminating point-to-point links, etc. (see the messaging sketch after this list).
- Identifying and engineering out as many reconciliations as possible (see the break-report sketch below).
- Identifying and engineering out as much dual-keying as possible.
- Identifying and engineering out as many manual processes as possible.
- Identifying and engineering out any end-user computing solutions - e.g. Access databases and Excel spreadsheets.
- Identifying and engineering out any end-user editing of "the answer" - taking information after all the work is completed and then editing it.
- Investigating the complete data lifecycle: who owns the data, who enriches it, who distributes it, where the single version of the truth lives, and which reconciliations can be removed.
- Identifying performance and scalability metrics, and testing risky areas against multiple data profiles (see the timing sketch below).
- Identifying real-time versus batch processes and interfaces, and eliminating batch dependencies wherever feasible.
- Consolidating onto a single platform where possible, and weighing single versus multiple instances.
- Ability to handle new vanilla business quickly, and new complex business within reasonable timescales.
- Identifying a clear support model, especially across regions where necessary.
- State maintenance and recovery - how well the system recovers from everyday processing and interface failures (see the checkpoint sketch below).
- BCP/DR (business continuity planning and disaster recovery) requirements and capabilities, general fault tolerance, and WAN dependencies.
- Where can project risk be reduced?
- Security - end-user and developer access, and a ring of steel around production (i.e. tightly restricted production access).
- What management information (MI) reporting facilities are in place?
- Emphasising simplicity as much as possible, including decommissioning systems that are no longer needed.
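
To illustrate the middleware bullet: a minimal sketch in Python, assuming a toy in-process, topic-based bus (a real deployment would use a proper broker such as a JMS or AMQP product). The point is that producers never address consumers directly, and target-specific transforms live at the subscribing edge, so adding a consumer never touches the publisher. The bus, topic name, and transforms here are all hypothetical.

```python
import datetime
from collections import defaultdict
from typing import Any, Callable

class MessageBus:
    """Toy topic-based bus: producers and consumers only share topic names."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None],
                  transform: Callable[[Any], Any] = lambda m: m) -> None:
        # The target-specific transform lives at the consuming edge,
        # never inside the publisher.
        self._subscribers[topic].append(lambda msg: handler(transform(msg)))

    def publish(self, topic: str, message: Any) -> None:
        for deliver in self._subscribers[topic]:
            try:
                deliver(message)
            except Exception as exc:
                # One broken consumer doesn't take the others down;
                # a real broker would dead-letter the message here.
                print(f"delivery failed: {exc}")

bus = MessageBus()
# Settlement wants ISO-formatted dates; risk wants the raw trade.
bus.subscribe("trades", lambda t: print("settlement:", t),
              transform=lambda t: {**t, "date": t["date"].isoformat()})
bus.subscribe("trades", lambda t: print("risk:", t))
bus.publish("trades", {"id": 42, "date": datetime.date(2024, 1, 15)})
```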
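
On engineering out reconciliations, the sketch below shows the kind of break report that proliferates once two systems each keep their own copy of the same data; the record layout and field names are invented for illustration. The architectural goal is to make one system the owner of the data so this job ceases to exist, not to write it well.

```python
def reconcile(system_a: dict[str, dict], system_b: dict[str, dict]) -> list[str]:
    """Compare two keyed record sets and report the breaks.

    Every break is manual investigation work; removing the duplicated
    store removes the whole job.
    """
    breaks = []
    for key in sorted(system_a.keys() | system_b.keys()):
        a, b = system_a.get(key), system_b.get(key)
        if a is None:
            breaks.append(f"{key}: missing from system A")
        elif b is None:
            breaks.append(f"{key}: missing from system B")
        elif a != b:
            diffs = sorted(f for f in a.keys() | b.keys() if a.get(f) != b.get(f))
            breaks.append(f"{key}: fields differ: {diffs}")
    return breaks

# Hypothetical trade stores that have drifted apart through dual-keying.
front_office = {"T1": {"qty": 100, "price": 9.5}, "T2": {"qty": 50, "price": 4.2}}
back_office = {"T1": {"qty": 100, "price": 9.5}, "T2": {"qty": 50, "price": 4.25},
               "T3": {"qty": 75, "price": 1.1}}
for line in reconcile(front_office, back_office):
    print(line)
```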
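
For the performance and scalability bullet, a minimal timing harness: the idea is to run the risky operation against several data profiles rather than one happy-path dataset, because different shapes stress different code paths. The operation under test, the profile shapes, and the sizes are all assumptions.

```python
import random
import time

def process(trades: list[dict]) -> None:
    # Stand-in for the risky operation under test (e.g. a pricing or
    # aggregation loop over the day's trades).
    sorted(trades, key=lambda t: (t["book"], t["qty"]))

def make_profile(n: int, books: int) -> list[dict]:
    return [{"book": f"B{random.randrange(books)}", "qty": random.random()}
            for _ in range(n)]

profiles = {
    "typical day": make_profile(10_000, books=50),
    "month-end spike": make_profile(200_000, books=50),
    "single hot book": make_profile(200_000, books=1),
}
for name, data in profiles.items():
    start = time.perf_counter()
    process(data)
    elapsed = time.perf_counter() - start
    print(f"{name:>16}: {elapsed:.3f}s for {len(data):,} rows")
```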
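
And for state maintenance and recovery, a sketch of a checkpointed, idempotent batch interface: an everyday mid-file failure then means simply re-running the job, not hand-repairing state. The checkpoint file and record layout are illustrative assumptions.

```python
import json
import os

CHECKPOINT = "interface.checkpoint"  # hypothetical checkpoint file

def load_checkpoint() -> int:
    # Resume point: count of records already applied successfully.
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)["done"]
    return 0

def save_checkpoint(done: int) -> None:
    with open(CHECKPOINT, "w") as f:
        json.dump({"done": done}, f)

def apply(record: dict) -> None:
    print("applied", record)  # stand-in for the real side effect

def run_interface(records: list[dict]) -> None:
    start = load_checkpoint()
    for i, record in enumerate(records):
        if i < start:
            continue  # already applied on a previous run
        apply(record)  # the side effect must itself be idempotent
        save_checkpoint(i + 1)  # durable progress after each record

run_interface([{"id": n} for n in range(5)])
# A crash mid-run just means calling run_interface() again: processing
# resumes from the checkpoint instead of double-applying records.
```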