APM for Java

AppDynamics Pro Documentation, Version 4.0.x

Contents

APM for Java
  Configure Java Monitoring
    Refine Business Transaction Discovery
    Java Web Application Entry Points
      Servlet Entry Points
        Automatic Naming for Servlet Business Transactions
        Custom Naming for Servlet Business Transactions
        Custom Expressions for Naming Business Transactions
        Advanced Servlet Transaction Detection Scenarios
      Struts Entry Points
      Web Service Entry Points
      POJO Entry Points
      Spring Bean Entry Points
      JMS Entry Points
      EJB Entry Points
      Binary Remoting Entry Points for Apache Thrift
      CometD Support
      Mule ESB Support
      JAX-RS Support
      Spring Integration Support
    Exclude Rule Examples for Java
    Configure Multithreaded Transactions for Java
    Configure End-to-End Message Transactions for Java
    Configure Backend Detection for Java
      Custom Exit Points for Java
      HTTP Exit Points for Java
      JDBC Exit Points for Java
      Message Queue Exit Points for Java
      Web Services Exit Points for Java
      Cassandra Exit Points for Java
      RMI Exit Points for Java
      Thrift Exit Points for Java
    Configure Memory Monitoring for Java
      Automatic Leak Detection for Java
      Object Instance Tracking for Java
      Custom Memory Structures for Java
    Configure Java Background Tasks
    Getter Chains in Java Configurations
    Instrumenting Java 8 Constructs
    Code Metric Information Points for Java
    Configure JMX Metrics from MBeans
      Create and Import or Export JMX Metric Configurations
      Exclude JMX Metrics
      Exclude MBean Attributes
      Configure JMX Without Transaction Monitoring
      Resolve JMX Configuration Issues
      MBean Getter Chains and Support for Boolean and String Attributes
  Monitor JVMs
    Troubleshoot Java Memory Leaks
    Troubleshoot Java Memory Thrash
  Monitor Java App Servers
    Monitor JMX MBeans
  Detect Code Deadlocks for Java
  Trace Multithreaded Transactions for Java
  Troubleshoot Slow Response Times for Java

APM for Java

Once your Java application environment is instrumented with the AppDynamics Java Agent, you can start monitoring Java application performance. For general information on configuring an AppDynamics deployment, see the topics under AppDynamics Essentials.

This section describes considerations specific to monitoring Java applications. It also describes how to fine-tune the configuration for specific Java frameworks and technologies, including, for example, monitoring and configuration topics for JVMs, Java application servers, and JMX.

Get a PDF version of the Java Monitoring documentation.

Learn by watching:
- End-to-end APM for Application Intelligence
- Java Platform: A Quick Tour
- More videos...

- Java Supported Environments
- Install a Controller or get a SaaS Controller
- Instrument Java Applications
- Monitor JVMs
- Monitor Java App Servers

What's New in 4.0?
- A new directory structure to enable drop-in upgrades
- Business transaction lock down
- Cross-application correlation
- New framework support: WebMethods, Tibco
- SLF4J, Log4j 2, and Logback instrumentation
- See more in the release notes

Configure Java Monitoring

After instrumenting your Java applications with the AppDynamics agent and connecting the agent to the AppDynamics Controller, you can start monitoring your application environment right away.
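As a quick orientation, connecting an application to the Controller amounts to adding the agent to the JVM startup command. The following is a minimal sketch: the paths, host name, and application, tier, and node names are placeholders, and the property names should be checked against the Java Agent Configuration Properties reference for your agent version. The same settings can alternatively be placed in the agent's controller-info.xml file.

    # Attach the Java Agent and identify this JVM to the Controller
    # (placeholder values; adjust paths and names for your environment).
    java -javaagent:/opt/appdynamics/javaagent.jar \
         -Dappdynamics.controller.hostName=controller.example.com \
         -Dappdynamics.controller.port=8090 \
         -Dappdynamics.agent.applicationName=ACME-Online \
         -Dappdynamics.agent.tierName=order-service \
         -Dappdynamics.agent.nodeName=order-node-1 \
         -jar acme-online.jar

Once the agent registers with the Controller, the JVM appears as a node in the named tier and monitoring begins without further code changes.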
The AppDynamics platform gives you insight into your environment immediately: it automatically builds a model of your environment, detects business transactions, and measures performance. Over the course of monitoring, however, it's likely that you will want to tweak and refine aspects of the default AppDynamics behavior, such as how your business transactions are modeled. You may also want to extend monitoring to processes that are not instrumented by default, such as background processes. AppDynamics gives you significant control over how these things are done.

Configuring AppDynamics, and Java monitoring in particular, is often an iterative, ongoing process, especially as new applications come online or as your application environment changes. The Java monitoring topics covered here are:

- Refine Business Transaction Discovery
- Java Web Application Entry Points
- Exclude Rule Examples for Java
- Configure Multithreaded Transactions for Java
- Configure Backend Detection for Java
- Configure Memory Monitoring for Java
- Configure Java Background Tasks
- Getter Chains in Java Configurations
- Instrumenting Java 8 Constructs
- Code Metric Information Points for Java
- Configure JMX Metrics from MBeans

See also Java Agent Configuration Properties.

Refine Business Transaction Discovery

On this page:
- Start a Transaction Discovery Session
- Working with the Transaction Discovery Tool

The default transaction discovery capabilities in the AppDynamics platform can get you up and running quickly. In most cases, however, you will want to tune and refine how business transactions are discovered, so that the business transactions you monitor reflect the traffic that matters most to you. The Transaction Discovery tool is designed to help with this. During a transaction discovery session, you can see the effect of the changes you make to the configuration without having to apply them to the active configuration. When you've finished making your changes, you can choose which tiers of the application to apply the tuned configuration to, or not apply the changes at all.

Start a Transaction Discovery Session

1. From the left navigation menu, click Configure -> Transaction Discovery and then click Start Discovery Session.
2. Select the nodes for which you want to configure business transaction discovery and then click OK. You can run the discovery session on more than one node, using Shift-click to select successive nodes, but all the nodes selected must be within the same tier. At the end of the configuration process you can apply your transaction detection configuration changes to one or more tiers. AppDynamics starts discovering transactions and displays them as soon as they are detected.
3. During discovery, you can use other features of AppDynamics while the discovery process continues in the background. You may want to do this in a busy environment where many transactions are happening. You can give AppDynamics a few minutes to complete the discovery and then return to the Transaction Discovery page by clicking Configure -> Transaction Discovery.

As discovery occurs, the Transaction Discovery page lists the transactions, and a transaction summary shows how many transactions were found by type. From the list you can determine whether some transactions are not being monitored. For these, you can create a custom match rule to identify those transaction types on the Edit Discovery Config tab. For transactions that you don't want to trace, create a custom exclude rule, also on the Edit Discovery Config tab.
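For a concrete picture of what discovery sees, consider a servlet entry point. The sketch below uses invented class and URI names; with the default configuration, the agent instruments the servlet container itself, so no application code changes are needed, and (as described in Automatic Naming for Servlet Business Transactions) the transaction is typically named from the first two segments of the request URI.

    // Hypothetical servlet. The agent detects each request to it as a
    // business transaction automatically. With default naming, requests
    // to /store/checkout/* are grouped under a transaction named
    // "/store/checkout" (the first two URI segments).
    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class CheckoutServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            resp.getWriter().println("checkout started");
        }
    }

If the default URI-based name groups together traffic you want to see separately, or splits traffic you want grouped, that is usually the cue to add a custom naming or match rule under the servlet entry point topics listed above.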
The page shows the list of transactions, their type, discovery timestamp (when first detected), load (the total number of calls to the business transaction since the discovery timestamp), and more.

Working with the Transaction Discovery Tool

While your transaction discovery session is running, you can perform the following operations.

Stop Session
Clicking Stop Session exits the Business Transaction Discovery tool without saving any of the changes you made.

Reset Data
Clicking Reset Data clears the window and initiates a new business transaction discovery session. The information displayed in the previous discovery session is not saved.

View Snapshot
A snapshot of the transaction provides details that can help you decide whether you want to monitor that transaction type or any of the methods executed by the transaction.

1. On the Transaction Discovery window, select a transaction for which transaction snapshots are supported. For example, AppDynamics doesn't take snapshots of Struts Action transactions, but it does for Servlet transactions. Click View Snapshot. The snapshot window appears. It is empty at first but fills up as snapshots are taken, and it continues to collect snapshots until you stop the discovery session. Click Clear Snapshots to restart the snapshot session and delete details of snapshots taken in the previous session.
2. Click a snapshot and then click View Snapshot to view the captured details of the transaction. The information displayed on this page can help you create transaction match and exclude rules. After viewing the details of a snapshot, you can click the breadcrumb to return to the snapshots page, where you can choose to view the details of another snapshot.

Use the Class/Method Browser
The Class/Method Browser lets you filter the classes of the discovered transactions to only those classes that meet specific criteria. Knowing which classes and methods are being traced can help you create custom match or exclude rules, for example by providing information you can use to complete the fields on the Split Transactions Using Payload tab of the New Business Transaction Match Rule for a servlet. The information presented here can also help you identify transactions that are not being traced, in which case you would create a custom match rule to include transactions of that type. To filter the classes shown.
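As an example of how class and method information from the browser feeds a custom rule, suppose it surfaces a background worker like the hypothetical class below. A POJO custom match rule defined on this class and method (see POJO Entry Points) would make each invocation the start of a business transaction; the rule itself is created in the UI, not in code, so this is only a sketch under assumed names.

    // Hypothetical class surfaced by the Class/Method Browser. A POJO
    // custom match rule on OrderQueueWorker.processOrder would start a
    // business transaction at each call, with downstream activity (JDBC,
    // HTTP, and other exit points) traced as part of that transaction.
    public class OrderQueueWorker {

        public void processOrder(String orderId) {
            validate(orderId);
            persist(orderId);
        }

        private void validate(String orderId) { /* business checks */ }
        private void persist(String orderId)  { /* e.g., a JDBC write, detected as a backend */ }
    }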